00:00:00.001 Started by upstream project "autotest-per-patch" build number 132372
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.110 The recommended git tool is: git
00:00:00.110 using credential 00000000-0000-0000-0000-000000000002
00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.162 Fetching changes from the remote Git repository
00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.213 > git --version # timeout=10
00:00:00.255 > git --version # 'git version 2.39.2'
00:00:00.255 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.506 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.516 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.525 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.525 > git config core.sparsecheckout # timeout=10
00:00:07.535 > git read-tree -mu HEAD # timeout=10
00:00:07.549 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.566 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.566 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.643 [Pipeline] Start of Pipeline
00:00:07.659 [Pipeline] library
00:00:07.660 Loading library shm_lib@master
00:00:07.660 Library shm_lib@master is cached. Copying from home.
00:00:07.675 [Pipeline] node
00:00:07.684 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.686 [Pipeline] {
00:00:07.695 [Pipeline] catchError
00:00:07.696 [Pipeline] {
00:00:07.706 [Pipeline] wrap
00:00:07.715 [Pipeline] {
00:00:07.722 [Pipeline] stage
00:00:07.724 [Pipeline] { (Prologue)
00:00:07.919 [Pipeline] sh
00:00:08.208 + logger -p user.info -t JENKINS-CI
00:00:08.223 [Pipeline] echo
00:00:08.224 Node: CYP9
00:00:08.231 [Pipeline] sh
00:00:08.528 [Pipeline] setCustomBuildProperty
00:00:08.537 [Pipeline] echo
00:00:08.538 Cleanup processes
00:00:08.543 [Pipeline] sh
00:00:08.824 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.824 1723743 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.835 [Pipeline] sh
00:00:09.119 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.119 ++ grep -v 'sudo pgrep'
00:00:09.119 ++ awk '{print $1}'
00:00:09.119 + sudo kill -9
00:00:09.119 + true
00:00:09.132 [Pipeline] cleanWs
00:00:09.142 [WS-CLEANUP] Deleting project workspace...
00:00:09.142 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.148 [WS-CLEANUP] done
00:00:09.151 [Pipeline] setCustomBuildProperty
00:00:09.160 [Pipeline] sh
00:00:09.442 + sudo git config --global --replace-all safe.directory '*'
00:00:09.530 [Pipeline] httpRequest
00:00:10.099 [Pipeline] echo
00:00:10.101 Sorcerer 10.211.164.20 is alive
00:00:10.113 [Pipeline] retry
00:00:10.115 [Pipeline] {
00:00:10.129 [Pipeline] httpRequest
00:00:10.134 HttpMethod: GET
00:00:10.135 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.135 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.144 Response Code: HTTP/1.1 200 OK
00:00:10.145 Success: Status code 200 is in the accepted range: 200,404
00:00:10.146 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.738 [Pipeline] }
00:00:19.756 [Pipeline] // retry
00:00:19.763 [Pipeline] sh
00:00:20.051 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.066 [Pipeline] httpRequest
00:00:20.465 [Pipeline] echo
00:00:20.466 Sorcerer 10.211.164.20 is alive
00:00:20.475 [Pipeline] retry
00:00:20.477 [Pipeline] {
00:00:20.491 [Pipeline] httpRequest
00:00:20.495 HttpMethod: GET
00:00:20.495 URL: http://10.211.164.20/packages/spdk_a25b161983cb186b61df1680bed188e45c455b9c.tar.gz
00:00:20.496 Sending request to url: http://10.211.164.20/packages/spdk_a25b161983cb186b61df1680bed188e45c455b9c.tar.gz
00:00:20.523 Response Code: HTTP/1.1 200 OK
00:00:20.523 Success: Status code 200 is in the accepted range: 200,404
00:00:20.523 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a25b161983cb186b61df1680bed188e45c455b9c.tar.gz
00:02:30.676 [Pipeline] }
00:02:30.693 [Pipeline] // retry
00:02:30.700 [Pipeline] sh
00:02:30.988 + tar --no-same-owner -xf spdk_a25b161983cb186b61df1680bed188e45c455b9c.tar.gz
00:02:34.301 [Pipeline] sh
00:02:34.636 + git -C spdk log --oneline -n5
00:02:34.636 a25b16198 test/nvme/xnvme: Enable polling in nvme driver
00:02:34.636 bb53e3ad9 test/nvme/xnvme: Drop null_blk
00:02:34.636 ace52fb4b test/nvme/xnvme: Tidy the test suite
00:02:34.636 46fd068fc test/nvme/xnvme: Add io_uring_cmd
00:02:34.636 4d3e9954d test/nvme/xnvme: Add different io patterns
00:02:34.647 [Pipeline] }
00:02:34.660 [Pipeline] // stage
00:02:34.669 [Pipeline] stage
00:02:34.671 [Pipeline] { (Prepare)
00:02:34.687 [Pipeline] writeFile
00:02:34.701 [Pipeline] sh
00:02:34.988 + logger -p user.info -t JENKINS-CI
00:02:35.001 [Pipeline] sh
00:02:35.286 + logger -p user.info -t JENKINS-CI
00:02:35.303 [Pipeline] sh
00:02:35.589 + cat autorun-spdk.conf
00:02:35.589 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.589 SPDK_TEST_NVMF=1
00:02:35.589 SPDK_TEST_NVME_CLI=1
00:02:35.589 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:35.589 SPDK_TEST_NVMF_NICS=e810
00:02:35.589 SPDK_TEST_VFIOUSER=1
00:02:35.589 SPDK_RUN_UBSAN=1
00:02:35.589 NET_TYPE=phy
00:02:35.597 RUN_NIGHTLY=0
00:02:35.601 [Pipeline] readFile
00:02:35.622 [Pipeline] withEnv
00:02:35.624 [Pipeline] {
00:02:35.635 [Pipeline] sh
00:02:35.922 + set -ex
00:02:35.922 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:35.922 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:35.922 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:35.922 ++ SPDK_TEST_NVMF=1
00:02:35.922 ++ SPDK_TEST_NVME_CLI=1
00:02:35.922 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:35.922 ++ SPDK_TEST_NVMF_NICS=e810
00:02:35.922 ++ SPDK_TEST_VFIOUSER=1
++ SPDK_RUN_UBSAN=1
00:02:35.922 ++ NET_TYPE=phy
00:02:35.922 ++ RUN_NIGHTLY=0
00:02:35.922 + case $SPDK_TEST_NVMF_NICS in
00:02:35.922 + DRIVERS=ice
00:02:35.922 + [[ tcp == \r\d\m\a ]]
00:02:35.922 + [[ -n ice ]]
00:02:35.922 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:35.922 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:35.922 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:35.922 rmmod: ERROR: Module irdma is not currently loaded
00:02:35.922 rmmod: ERROR: Module i40iw is not currently loaded
00:02:35.922 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:35.922 + true
00:02:35.922 + for D in $DRIVERS
00:02:35.922 + sudo modprobe ice
00:02:35.922 + exit 0
00:02:35.932 [Pipeline] }
00:02:35.946 [Pipeline] // withEnv
00:02:35.950 [Pipeline] }
00:02:35.963 [Pipeline] // stage
00:02:35.972 [Pipeline] catchError
00:02:35.974 [Pipeline] {
00:02:35.986 [Pipeline] timeout
00:02:35.986 Timeout set to expire in 1 hr 0 min
00:02:35.988 [Pipeline] {
00:02:36.001 [Pipeline] stage
00:02:36.003 [Pipeline] { (Tests)
00:02:36.015 [Pipeline] sh
00:02:36.303 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:36.303 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:36.303 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:36.303 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:36.303 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.303 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:36.303 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:36.303 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:36.303 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:36.303 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:36.303 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:36.303 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:36.303 + source /etc/os-release
00:02:36.303 ++ NAME='Fedora Linux'
00:02:36.303 ++ VERSION='39 (Cloud Edition)'
00:02:36.303 ++ ID=fedora
00:02:36.303 ++ VERSION_ID=39
00:02:36.303 ++ VERSION_CODENAME=
00:02:36.303 ++ PLATFORM_ID=platform:f39
00:02:36.303 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:36.303 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:36.303 ++ LOGO=fedora-logo-icon
00:02:36.303 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:36.303 ++ HOME_URL=https://fedoraproject.org/
00:02:36.303 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:36.303 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:36.303 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:36.303 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:36.303 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:36.303 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:36.303 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:36.303 ++ SUPPORT_END=2024-11-12
00:02:36.303 ++ VARIANT='Cloud Edition'
00:02:36.303 ++ VARIANT_ID=cloud
00:02:36.303 + uname -a
00:02:36.303 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:36.303 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:39.603 Hugepages
00:02:39.603 node hugesize free / total
00:02:39.603 node0 1048576kB 0 / 0
00:02:39.603 node0 2048kB 0 / 0
00:02:39.603 node1 1048576kB 0 / 0
00:02:39.603 node1 2048kB 0 / 0
00:02:39.603
00:02:39.603 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:39.603 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:39.603 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:39.603 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:39.603 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:39.603 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:39.603 + rm -f /tmp/spdk-ld-path
00:02:39.603 + source autorun-spdk.conf
00:02:39.603 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:39.603 ++ SPDK_TEST_NVMF=1
00:02:39.603 ++ SPDK_TEST_NVME_CLI=1
00:02:39.603 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:39.603 ++ SPDK_TEST_NVMF_NICS=e810
00:02:39.603 ++ SPDK_TEST_VFIOUSER=1
00:02:39.603 ++ SPDK_RUN_UBSAN=1
00:02:39.603 ++ NET_TYPE=phy
00:02:39.603 ++ RUN_NIGHTLY=0
00:02:39.603 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:39.603 + [[ -n '' ]]
00:02:39.603 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:39.603 + for M in /var/spdk/build-*-manifest.txt
00:02:39.603 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:39.603 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:39.603 + for M in /var/spdk/build-*-manifest.txt
00:02:39.603 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:39.603 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:39.603 + for M in /var/spdk/build-*-manifest.txt
00:02:39.603 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:39.603 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:39.603 ++ uname
00:02:39.603 + [[ Linux == \L\i\n\u\x ]]
00:02:39.603 + sudo dmesg -T
00:02:39.603 + sudo dmesg --clear
00:02:39.603 + dmesg_pid=1725309
00:02:39.603 + [[ Fedora Linux == FreeBSD ]]
00:02:39.603 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:39.603 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:39.603 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:39.603 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:39.603 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:39.603 + [[ -x /usr/src/fio-static/fio ]]
00:02:39.603 + export FIO_BIN=/usr/src/fio-static/fio
00:02:39.603 + FIO_BIN=/usr/src/fio-static/fio
00:02:39.603 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:39.603 + sudo dmesg -Tw
00:02:39.603 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:39.603 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:39.603 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:39.603 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:39.603 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:39.603 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:39.603 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:39.603 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:39.604 10:20:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:39.604 10:20:11 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:39.604 10:20:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:39.604 10:20:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:39.604 10:20:11 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:39.865 10:20:12 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:39.865 10:20:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:39.865 10:20:12 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:39.865 10:20:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:39.865 10:20:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:39.865 10:20:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:39.865 10:20:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.865 10:20:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.865 10:20:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.865 10:20:12 -- paths/export.sh@5 -- $ export PATH
00:02:39.865 10:20:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.865 10:20:12 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:39.865 10:20:12 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:39.865 10:20:12 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732094412.XXXXXX
00:02:39.865 10:20:12 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732094412.N4puXR
00:02:39.865 10:20:12 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:39.865 10:20:12 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:39.865 10:20:12 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:39.865 10:20:12 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:39.865 10:20:12 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:39.865 10:20:12 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:39.865 10:20:12 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:39.865 10:20:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.865 10:20:12 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:39.865 10:20:12 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:39.865 10:20:12 -- pm/common@17 -- $ local monitor
00:02:39.865 10:20:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.865 10:20:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.865 10:20:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.865 10:20:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.865 10:20:12 -- pm/common@21 -- $ date +%s
00:02:39.865 10:20:12 -- pm/common@21 -- $ date +%s
00:02:39.865 10:20:12 -- pm/common@25 -- $ sleep 1
00:02:39.865 10:20:12 -- pm/common@21 -- $ date +%s
00:02:39.865 10:20:12 -- pm/common@21 -- $ date +%s
00:02:39.865 10:20:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094412
00:02:39.865 10:20:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094412
00:02:39.865 10:20:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094412
00:02:39.865 10:20:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094412
00:02:39.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094412_collect-cpu-load.pm.log
00:02:39.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094412_collect-vmstat.pm.log
00:02:39.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094412_collect-cpu-temp.pm.log
00:02:39.865 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094412_collect-bmc-pm.bmc.pm.log
00:02:40.807 10:20:13 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:40.807 10:20:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:40.807 10:20:13 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:40.807 10:20:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:40.807 10:20:13 -- spdk/autobuild.sh@16 -- $ date -u
00:02:40.807 Wed Nov 20 09:20:13 AM UTC 2024
00:02:40.807 10:20:13 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:40.807 v25.01-pre-210-ga25b16198
00:02:40.807 10:20:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:40.807 10:20:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:40.807 10:20:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:40.807 10:20:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:40.807 10:20:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.807 10:20:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.807 ************************************
00:02:40.807 START TEST ubsan
00:02:40.807 ************************************
00:02:40.807 10:20:13 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:40.807 using ubsan
00:02:40.807
00:02:40.807 real 0m0.001s
00:02:40.807 user 0m0.001s
00:02:40.807 sys 0m0.000s
00:02:40.807 10:20:13 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:40.807 10:20:13 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:40.807 ************************************
00:02:40.807 END TEST ubsan
00:02:40.807 ************************************
00:02:41.068 10:20:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:41.068 10:20:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:41.068 10:20:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:41.068 10:20:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:41.068 10:20:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:41.068 10:20:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:41.068 10:20:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:41.068 10:20:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:41.068 10:20:13 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:41.068 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:41.068 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:41.640 Using 'verbs' RDMA provider
00:02:57.489 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:09.715 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:10.237 Creating mk/config.mk...done.
00:03:10.237 Creating mk/cc.flags.mk...done.
00:03:10.237 Type 'make' to build.
00:03:10.237 10:20:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:03:10.237 10:20:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:10.237 10:20:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:10.237 10:20:42 -- common/autotest_common.sh@10 -- $ set +x
00:03:10.237 ************************************
00:03:10.237 START TEST make
00:03:10.237 ************************************
00:03:10.497 10:20:42 make -- common/autotest_common.sh@1129 -- $ make -j144
00:03:10.757 make[1]: Nothing to be done for 'all'.
00:03:12.142 The Meson build system
00:03:12.142 Version: 1.5.0
00:03:12.142 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:12.142 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:12.142 Build type: native build
00:03:12.142 Project name: libvfio-user
00:03:12.142 Project version: 0.0.1
00:03:12.142 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:12.142 C linker for the host machine: cc ld.bfd 2.40-14
00:03:12.142 Host machine cpu family: x86_64
00:03:12.142 Host machine cpu: x86_64
00:03:12.142 Run-time dependency threads found: YES
00:03:12.142 Library dl found: YES
00:03:12.142 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:12.142 Run-time dependency json-c found: YES 0.17
00:03:12.142 Run-time dependency cmocka found: YES 1.1.7
00:03:12.142 Program pytest-3 found: NO
00:03:12.142 Program flake8 found: NO
00:03:12.142 Program misspell-fixer found: NO
00:03:12.142 Program restructuredtext-lint found: NO
00:03:12.142 Program valgrind found: YES (/usr/bin/valgrind)
00:03:12.142 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:12.142 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:12.142 Compiler for C supports arguments -Wwrite-strings: YES
00:03:12.142 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:12.142 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:12.142 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:12.142 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:12.142 Build targets in project: 8
00:03:12.142 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:12.142 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:12.142
00:03:12.142 libvfio-user 0.0.1
00:03:12.142
00:03:12.142 User defined options
00:03:12.142 buildtype : debug
00:03:12.142 default_library: shared
00:03:12.142 libdir : /usr/local/lib
00:03:12.142
00:03:12.142 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:12.717 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:12.717 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:12.717 [2/37] Compiling C object samples/null.p/null.c.o
00:03:12.717 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:12.717 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:12.717 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:12.717 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:12.717 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:12.717 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:12.717 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:12.717 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:12.717 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:12.717 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:12.717 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:12.717 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:12.717 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:12.717 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:12.717 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:12.717 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:12.717 [19/37] Compiling C object samples/server.p/server.c.o
00:03:12.717 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:12.717 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:12.717 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:12.717 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:12.717 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:12.717 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:12.717 [26/37] Compiling C object samples/client.p/client.c.o
00:03:12.978 [27/37] Linking target samples/client
00:03:12.978 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:12.978 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:12.978 [30/37] Linking target test/unit_tests
00:03:13.255 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:13.255 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:13.255 [33/37] Linking target samples/null
00:03:13.255 [34/37] Linking target samples/server
00:03:13.255 [35/37] Linking target samples/shadow_ioeventfd_server
00:03:13.255 [36/37] Linking target samples/lspci
00:03:13.255 [37/37] Linking target samples/gpio-pci-idio-16
00:03:13.255 INFO: autodetecting backend as ninja
00:03:13.255 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:13.255 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:13.516 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:13.516 ninja: no work to do.
00:03:20.112 The Meson build system
00:03:20.112 Version: 1.5.0
00:03:20.112 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:20.112 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:20.112 Build type: native build
00:03:20.112 Program cat found: YES (/usr/bin/cat)
00:03:20.112 Project name: DPDK
00:03:20.112 Project version: 24.03.0
00:03:20.112 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:20.112 C linker for the host machine: cc ld.bfd 2.40-14
00:03:20.112 Host machine cpu family: x86_64
00:03:20.112 Host machine cpu: x86_64
00:03:20.112 Message: ## Building in Developer Mode ##
00:03:20.112 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:20.112 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:20.112 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:20.112 Program python3 found: YES (/usr/bin/python3)
00:03:20.112 Program cat found: YES (/usr/bin/cat)
00:03:20.112 Compiler for C supports arguments -march=native: YES
00:03:20.112 Checking for size of "void *" : 8
00:03:20.112 Checking for size of "void *" : 8 (cached)
00:03:20.112 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:20.112 Library m found: YES
00:03:20.112 Library numa found: YES
00:03:20.112 Has header "numaif.h" : YES
00:03:20.112 Library fdt found: NO
00:03:20.112 Library execinfo found: NO
00:03:20.112 Has header "execinfo.h" : YES
00:03:20.112 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:20.112 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:20.112 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:20.112 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:20.112 Run-time dependency openssl found: YES 3.1.1
00:03:20.112 Run-time dependency libpcap found: YES 1.10.4
00:03:20.112 Has header "pcap.h" with dependency libpcap: YES
00:03:20.112 Compiler for C supports arguments -Wcast-qual: YES
00:03:20.112 Compiler for C supports arguments -Wdeprecated: YES
00:03:20.112 Compiler for C supports arguments -Wformat: YES
00:03:20.112 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:20.112 Compiler for C supports arguments -Wformat-security: NO
00:03:20.112 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:20.112 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:20.112 Compiler for C supports arguments -Wnested-externs: YES
00:03:20.112 Compiler for C supports arguments -Wold-style-definition: YES
00:03:20.112 Compiler for C supports arguments -Wpointer-arith: YES
00:03:20.112 Compiler for C supports arguments -Wsign-compare: YES
00:03:20.112 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:20.112 Compiler for C supports arguments -Wundef: YES
00:03:20.112 Compiler for C supports arguments -Wwrite-strings: YES
00:03:20.112 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:20.112 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:20.112 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:20.112 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:20.112 Program objdump found: YES (/usr/bin/objdump)
00:03:20.112 Compiler for C supports arguments -mavx512f: YES
00:03:20.112 Checking if "AVX512 checking" compiles: YES
00:03:20.112 Fetching value of define "__SSE4_2__" : 1
00:03:20.112 Fetching value of define "__AES__" : 1
00:03:20.112 Fetching value of define "__AVX__" : 1
00:03:20.112 Fetching value of define "__AVX2__" : 1
00:03:20.112 Fetching value of define "__AVX512BW__" : 1
00:03:20.112 Fetching value of define "__AVX512CD__" : 1
00:03:20.112 Fetching value of define "__AVX512DQ__" : 1
00:03:20.112 Fetching value of define "__AVX512F__" : 1
00:03:20.112 Fetching value of define "__AVX512VL__" : 1
00:03:20.112 Fetching value of define "__PCLMUL__" : 1
00:03:20.112 Fetching value of define "__RDRND__" : 1
00:03:20.112 Fetching value of define "__RDSEED__" : 1
00:03:20.112 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:20.112 Fetching value of define "__znver1__" : (undefined)
00:03:20.112 Fetching value of define "__znver2__" : (undefined)
00:03:20.112 Fetching value of define "__znver3__" : (undefined)
00:03:20.112 Fetching value of define "__znver4__" : (undefined)
00:03:20.112 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:20.112 Message: lib/log: Defining dependency "log"
00:03:20.112 Message: lib/kvargs: Defining dependency "kvargs"
00:03:20.112 Message: lib/telemetry: Defining dependency "telemetry"
00:03:20.112 Checking for function "getentropy" : NO
00:03:20.112 Message: lib/eal: Defining dependency "eal"
00:03:20.112 Message: lib/ring: Defining dependency "ring"
00:03:20.112 Message: lib/rcu: Defining dependency "rcu"
00:03:20.112 Message: lib/mempool: Defining dependency "mempool"
00:03:20.112 Message: lib/mbuf: Defining dependency "mbuf"
00:03:20.112 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:20.112 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:20.112 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:20.112 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:20.113 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:20.113 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:20.113 Compiler for C supports arguments -mpclmul: YES
00:03:20.113 Compiler for C supports arguments -maes: YES
00:03:20.113 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:20.113 Compiler for C supports arguments -mavx512bw: YES
00:03:20.113 Compiler for C supports arguments -mavx512dq: YES
00:03:20.113 Compiler for C supports arguments -mavx512vl: YES
00:03:20.113 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:20.113 Compiler for C supports arguments -mavx2: YES
00:03:20.113 Compiler for C supports arguments -mavx: YES
00:03:20.113 Message: lib/net: Defining dependency "net"
00:03:20.113 Message: lib/meter: Defining dependency "meter"
00:03:20.113 Message: lib/ethdev: Defining dependency "ethdev"
00:03:20.113 Message: lib/pci: Defining dependency "pci"
00:03:20.113 Message: lib/cmdline: Defining dependency "cmdline"
00:03:20.113 Message: lib/hash: Defining dependency "hash"
00:03:20.113 Message: lib/timer: Defining dependency "timer"
00:03:20.113 Message: lib/compressdev: Defining dependency "compressdev"
00:03:20.113 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:20.113 Message: lib/dmadev: Defining dependency "dmadev"
00:03:20.113 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:20.113 Message: lib/power: Defining dependency "power"
00:03:20.113 Message: lib/reorder: Defining dependency "reorder"
00:03:20.113 Message: lib/security: Defining dependency "security"
00:03:20.113 Has header "linux/userfaultfd.h" : YES
00:03:20.113 Has header "linux/vduse.h" : YES
00:03:20.113 Message: lib/vhost: Defining dependency "vhost"
00:03:20.113 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:20.113 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:20.113 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:20.113 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:20.113 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:20.113 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:20.113 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:20.113 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:20.113 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:20.113 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:20.113 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:20.113 Configuring doxy-api-html.conf using configuration
00:03:20.113 Configuring doxy-api-man.conf using configuration
00:03:20.113 Program mandb found: YES (/usr/bin/mandb)
00:03:20.113 Program sphinx-build found: NO
00:03:20.113 Configuring rte_build_config.h using configuration
00:03:20.113 Message:
00:03:20.113 =================
00:03:20.113 Applications Enabled
00:03:20.113 =================
00:03:20.113
00:03:20.113 apps:
00:03:20.113
00:03:20.113
00:03:20.113 Message:
00:03:20.113 =================
00:03:20.113 Libraries Enabled
00:03:20.113 =================
00:03:20.113
00:03:20.113 libs:
00:03:20.113 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:20.113 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:20.113 cryptodev, dmadev, power, reorder, security, vhost,
00:03:20.113
00:03:20.113 Message:
00:03:20.113 ===============
00:03:20.113 Drivers Enabled
00:03:20.113 ===============
00:03:20.113
00:03:20.113 common:
00:03:20.113
00:03:20.113 bus:
00:03:20.113 pci, vdev,
00:03:20.113 mempool:
00:03:20.113 ring,
00:03:20.113 dma:
00:03:20.113
00:03:20.113 net:
00:03:20.113
00:03:20.113 crypto:
00:03:20.113
00:03:20.113 compress:
00:03:20.113
00:03:20.113 vdpa:
00:03:20.113
00:03:20.113
00:03:20.113 Message:
00:03:20.113 =================
00:03:20.113 Content Skipped
00:03:20.113 =================
00:03:20.113
00:03:20.113 apps:
00:03:20.113 dumpcap: explicitly disabled via build config
00:03:20.113 graph: explicitly disabled via build config
00:03:20.113 pdump: explicitly disabled via build config
00:03:20.113 proc-info: explicitly disabled via build config
00:03:20.113 test-acl: explicitly disabled via build config
00:03:20.113 test-bbdev: explicitly disabled via build config
00:03:20.113 test-cmdline: explicitly disabled via build config
00:03:20.113 test-compress-perf: explicitly disabled via build config
00:03:20.113 test-crypto-perf: explicitly disabled via build config
00:03:20.113 test-dma-perf: explicitly disabled via build config
00:03:20.113 test-eventdev: explicitly disabled via build config
00:03:20.113 test-fib: explicitly disabled via build config
00:03:20.113 test-flow-perf: explicitly disabled via build config
00:03:20.113 test-gpudev: explicitly disabled via build config
00:03:20.113 test-mldev: explicitly disabled via build config
00:03:20.113 test-pipeline: explicitly disabled via build config
00:03:20.113 test-pmd: explicitly disabled via build config
00:03:20.113 test-regex: explicitly disabled via build config
00:03:20.113 test-sad: explicitly disabled via build config
00:03:20.113 test-security-perf: explicitly disabled via build config
00:03:20.113
00:03:20.113 libs:
00:03:20.113 argparse: explicitly disabled via build config
00:03:20.113 metrics: explicitly disabled via build config
00:03:20.113 acl: explicitly disabled via build config
00:03:20.113 bbdev: explicitly disabled via build config
00:03:20.113 bitratestats: explicitly disabled via build config
00:03:20.113 bpf: explicitly disabled via build config
00:03:20.113 cfgfile: explicitly disabled via build config
00:03:20.113 distributor: explicitly disabled via build config
00:03:20.113 efd: explicitly disabled via build config
00:03:20.113 eventdev: explicitly disabled via build config
00:03:20.113 dispatcher: explicitly disabled via build config
00:03:20.113 gpudev: explicitly disabled via build config
00:03:20.113 gro: explicitly disabled via build config
00:03:20.113 gso: explicitly disabled via build config
00:03:20.113 ip_frag: explicitly disabled via build config
00:03:20.113 jobstats: explicitly disabled via build config
00:03:20.113 latencystats: explicitly disabled via build config
00:03:20.113 lpm: explicitly disabled via build config
00:03:20.113 member: explicitly disabled via build config
00:03:20.113 pcapng: explicitly disabled via build config
00:03:20.113 rawdev: explicitly disabled via build config
00:03:20.113 regexdev: explicitly disabled via build config
00:03:20.113 mldev: explicitly disabled via build config
00:03:20.113 rib: explicitly disabled via build config
00:03:20.113 sched: explicitly disabled via build config
00:03:20.113 stack: explicitly disabled via build config
00:03:20.113 ipsec: explicitly disabled via build config
00:03:20.113 pdcp: explicitly disabled via build config
00:03:20.113 fib: explicitly disabled via build config
00:03:20.113 port: explicitly disabled via build config
00:03:20.113 pdump: explicitly disabled via build config
00:03:20.113 table: explicitly disabled via build config
00:03:20.113 pipeline: explicitly disabled via build config
00:03:20.113 graph: explicitly disabled via build config
00:03:20.113 node: explicitly disabled via build config
00:03:20.113
00:03:20.113 drivers:
00:03:20.113 common/cpt: not in enabled drivers build config
00:03:20.113 common/dpaax: not in enabled drivers build config
00:03:20.113 common/iavf: not in enabled drivers build config
00:03:20.113 common/idpf: not in enabled drivers build config
00:03:20.113 common/ionic: not in enabled drivers build config
00:03:20.113 common/mvep: not in enabled drivers build config
00:03:20.113 common/octeontx: not in enabled drivers build config
00:03:20.113 bus/auxiliary: not in enabled drivers build config
00:03:20.113 bus/cdx: not in enabled drivers build config
00:03:20.113 bus/dpaa: not in enabled drivers build config
00:03:20.113 bus/fslmc: not in enabled drivers build config
00:03:20.113 bus/ifpga: not in enabled drivers build config
00:03:20.113 bus/platform: not in enabled drivers build config
00:03:20.113 bus/uacce: not in enabled drivers build config
00:03:20.113 bus/vmbus: not in enabled drivers build config
00:03:20.113 common/cnxk: not in enabled drivers build config
00:03:20.113 common/mlx5: not in enabled drivers build config
00:03:20.113 common/nfp: not in enabled drivers build config
00:03:20.113 common/nitrox: not in enabled drivers build config
00:03:20.113 common/qat: not in enabled drivers build config
00:03:20.113 common/sfc_efx: not in enabled drivers build config
00:03:20.113 mempool/bucket: not in enabled drivers build config
00:03:20.113 mempool/cnxk: not in enabled drivers build config
00:03:20.113 mempool/dpaa: not in enabled drivers build config
00:03:20.113 mempool/dpaa2: not in enabled drivers build config
00:03:20.113 mempool/octeontx: not in enabled drivers build config
00:03:20.113 mempool/stack: not in enabled drivers build config
00:03:20.113 dma/cnxk: not in enabled drivers build config
00:03:20.113 dma/dpaa: not in enabled drivers build config
00:03:20.113 dma/dpaa2: not in enabled drivers build config
00:03:20.113 dma/hisilicon: not in enabled drivers build config
00:03:20.113 dma/idxd: not in enabled drivers build config
00:03:20.113 dma/ioat: not in enabled drivers build config
00:03:20.113 dma/skeleton: not in enabled drivers build config
00:03:20.113 net/af_packet: not in enabled drivers build config
00:03:20.113 net/af_xdp: not in enabled drivers build config
00:03:20.113 net/ark: not in enabled drivers build config
00:03:20.113 net/atlantic: not in enabled drivers build config
00:03:20.113 net/avp: not in enabled drivers build config
00:03:20.113 net/axgbe: not in enabled drivers build config
00:03:20.113 net/bnx2x: not in enabled drivers build config
00:03:20.113 net/bnxt: not in enabled drivers build config
00:03:20.113 net/bonding: not in enabled drivers build config
00:03:20.113 net/cnxk: not in enabled drivers build config
00:03:20.113 net/cpfl: not in enabled drivers build config
00:03:20.113 net/cxgbe: not in enabled drivers build config
00:03:20.113 net/dpaa: not in enabled drivers build config
00:03:20.113 net/dpaa2: not in enabled drivers build config
00:03:20.113 net/e1000: not in enabled drivers build config
00:03:20.113 net/ena: not in enabled drivers build config
00:03:20.113 net/enetc: not in enabled drivers build config
00:03:20.113 net/enetfec: not in enabled drivers build config
00:03:20.114 net/enic: not in enabled drivers build config
00:03:20.114 net/failsafe: not in enabled drivers build config
00:03:20.114 net/fm10k: not in enabled drivers build config
00:03:20.114 net/gve: not in enabled drivers build config
00:03:20.114 net/hinic: not in enabled drivers build config
00:03:20.114 net/hns3: not in enabled drivers build config
00:03:20.114 net/i40e: not in enabled drivers build config
00:03:20.114 net/iavf: not in enabled drivers build config
00:03:20.114 net/ice: not in enabled drivers build config
00:03:20.114 net/idpf: not in enabled drivers build config
00:03:20.114 net/igc: not in enabled drivers build config
00:03:20.114 net/ionic: not in enabled drivers build config
00:03:20.114 net/ipn3ke: not in enabled drivers build config
00:03:20.114 net/ixgbe: not in enabled drivers build config
00:03:20.114 net/mana: not in enabled drivers build config
00:03:20.114 net/memif: not in enabled drivers build config
00:03:20.114 net/mlx4: not in enabled drivers build config
00:03:20.114 net/mlx5: not in enabled drivers build config
00:03:20.114 net/mvneta: not in enabled drivers build config
00:03:20.114 net/mvpp2: not in enabled drivers build config
00:03:20.114 net/netvsc: not in enabled drivers build config
00:03:20.114 net/nfb: not in enabled drivers build config
00:03:20.114 net/nfp: not in enabled drivers build config
00:03:20.114 net/ngbe: not in enabled drivers build config
00:03:20.114 net/null: not in enabled drivers build config
00:03:20.114 net/octeontx: not in enabled drivers build config
00:03:20.114 net/octeon_ep: not in enabled drivers build config
00:03:20.114 net/pcap: not in enabled drivers build config
00:03:20.114 net/pfe: not in enabled drivers build config
00:03:20.114 net/qede: not in enabled drivers build config
00:03:20.114 net/ring: not in enabled drivers build config
00:03:20.114 net/sfc: not in enabled drivers build config
00:03:20.114 net/softnic: not in enabled drivers build config
00:03:20.114 net/tap: not in enabled drivers build config
00:03:20.114 net/thunderx: not in enabled drivers build config
00:03:20.114 net/txgbe: not in enabled drivers build config
00:03:20.114 net/vdev_netvsc: not in enabled drivers build config
00:03:20.114 net/vhost: not in enabled drivers build config
00:03:20.114 net/virtio: not in enabled drivers build config
00:03:20.114 net/vmxnet3: not in enabled drivers build config
00:03:20.114 raw/*: missing internal dependency, "rawdev"
00:03:20.114 crypto/armv8: not in enabled drivers build config
00:03:20.114 crypto/bcmfs: not in enabled drivers build config
00:03:20.114 crypto/caam_jr: not in enabled drivers build config
00:03:20.114 crypto/ccp: not in enabled drivers build config
00:03:20.114 crypto/cnxk: not in enabled drivers build config
00:03:20.114 crypto/dpaa_sec: not in enabled drivers build config
00:03:20.114 crypto/dpaa2_sec: not in enabled drivers build config
00:03:20.114 crypto/ipsec_mb: not in enabled drivers build config
00:03:20.114 crypto/mlx5: not in enabled drivers build config
00:03:20.114 crypto/mvsam: not in enabled drivers build config
00:03:20.114 crypto/nitrox: not in enabled drivers build config
00:03:20.114 crypto/null: not in enabled drivers build config
00:03:20.114 crypto/octeontx: not in enabled drivers build config
00:03:20.114 crypto/openssl: not in enabled drivers build config
00:03:20.114 crypto/scheduler: not in enabled drivers build config
00:03:20.114 crypto/uadk: not in enabled drivers build config
00:03:20.114 crypto/virtio: not in enabled drivers build config
00:03:20.114 compress/isal: not in enabled drivers build config
00:03:20.114 compress/mlx5: not in enabled drivers build config
00:03:20.114 compress/nitrox: not in enabled drivers build config
00:03:20.114 compress/octeontx: not in enabled drivers build config
00:03:20.114 compress/zlib: not in enabled drivers build config
00:03:20.114 regex/*: missing internal dependency, "regexdev"
00:03:20.114 ml/*: missing internal dependency, "mldev"
00:03:20.114 vdpa/ifc: not in enabled drivers build config
00:03:20.114 vdpa/mlx5: not in enabled drivers build config
00:03:20.114 vdpa/nfp: not in enabled drivers build config
00:03:20.114 vdpa/sfc: not in enabled drivers build config
00:03:20.114 event/*: missing internal dependency, "eventdev"
00:03:20.114 baseband/*: missing internal dependency, "bbdev"
00:03:20.114 gpu/*: missing internal dependency, "gpudev"
00:03:20.114
00:03:20.114
00:03:20.114 Build targets in project: 84
00:03:20.114
00:03:20.114 DPDK 24.03.0
00:03:20.114
00:03:20.114 User defined options
00:03:20.114 buildtype : debug
00:03:20.114 default_library : shared
00:03:20.114 libdir : lib
00:03:20.114 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:20.114 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:20.114 c_link_args :
00:03:20.114 cpu_instruction_set: native
00:03:20.114 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:03:20.114 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:03:20.114 enable_docs : false
00:03:20.114 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:20.114 enable_kmods : false
00:03:20.114 max_lcores : 128
00:03:20.114 tests : false
00:03:20.114
00:03:20.114 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:20.114 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:20.114 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:20.114 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:20.114 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:20.114 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:20.114 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:20.114 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:20.114 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:20.114 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:20.114 [9/267] Linking static target lib/librte_kvargs.a
00:03:20.114 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:20.114 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:20.114 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:20.114 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:20.114 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:20.114 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:20.114 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:20.114 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:20.114 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:20.373 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:20.373 [20/267] Linking static target lib/librte_log.a
00:03:20.373 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:20.373 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:20.373 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:20.373 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:20.373 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:20.373 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:20.373 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:20.373 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:20.373 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:20.373 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:20.373 [31/267] Linking static target lib/librte_pci.a
00:03:20.373 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:20.373 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:20.373 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:20.373 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:20.373 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:20.373 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:20.373 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:20.632 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.632 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:20.633 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.633 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:20.633 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:20.633 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:20.633 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:20.633 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:20.633 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:20.633 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:20.633 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:20.633 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:20.633 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:20.633 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:20.633 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:20.633 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:20.633 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:20.633 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:20.633 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:20.633 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:20.633 [59/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:20.633 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:20.633 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:20.633 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:20.633 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:20.633 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:20.633 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:20.633 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:20.633 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:20.633 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:20.633 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:20.633 [70/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:20.633 [71/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:20.633 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:20.633 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:20.633 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:20.633 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:20.633 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:20.633 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:20.633 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:20.633 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:20.633 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:20.633 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:20.633 [82/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:20.633 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:20.633 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:20.633 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:20.633 [86/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:20.633 [87/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:20.633 [88/267] Linking static target lib/librte_ring.a
00:03:20.633 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:20.633 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:20.633 [91/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:20.633 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:20.633 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:20.633 [94/267] Linking static target lib/librte_telemetry.a
00:03:20.633 [95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:20.633 [96/267] Linking static target lib/librte_meter.a
00:03:20.633 [97/267] Linking static target lib/librte_timer.a
00:03:20.633 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:20.633 [99/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:20.633 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:20.894 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:20.894 [102/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:20.894 [103/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:20.894 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:20.894 [105/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:20.894 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:20.894 [107/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:20.894 [108/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:20.894 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:20.894 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:20.894 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:20.894 [112/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:20.894 [113/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:20.894 [114/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:20.894 [115/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:20.894 [116/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:20.894 [117/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:20.894 [118/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:20.894 [119/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:20.894 [120/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:20.894 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:20.894 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:20.894 [123/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:20.894 [124/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:20.894 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:20.894 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:20.894 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:20.894 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:20.894 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:20.894 [130/267] Linking static target lib/librte_cmdline.a
00:03:20.894 [131/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:20.894 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:20.894 [133/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:20.894 [134/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:20.894 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:20.894 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:20.894 [137/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:20.894 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:20.894 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:20.894 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:20.894 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:20.894 [142/267] Linking static target lib/librte_dmadev.a
00:03:20.894 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:20.894 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:20.894 [145/267] Linking static target lib/librte_power.a
00:03:20.894 [146/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:20.894 [147/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:20.894 [148/267] Linking static target lib/librte_net.a
00:03:20.894 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:20.894 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:20.894 [151/267]
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.894 [152/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:20.894 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.894 [154/267] Linking static target lib/librte_mempool.a 00:03:20.894 [155/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.894 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.894 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:20.894 [158/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.894 [159/267] Linking static target lib/librte_compressdev.a 00:03:20.894 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:20.894 [161/267] Linking static target lib/librte_reorder.a 00:03:20.894 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:20.894 [163/267] Linking static target lib/librte_security.a 00:03:20.894 [164/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.895 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:20.895 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.895 [167/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:20.895 [168/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.895 [169/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:20.895 [170/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:20.895 [171/267] Linking static target lib/librte_rcu.a 00:03:20.895 [172/267] Linking target lib/librte_log.so.24.1 00:03:20.895 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:20.895 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:20.895 [175/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.895 [176/267] Linking static target lib/librte_eal.a 00:03:20.895 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:20.895 [178/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.895 [179/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.895 [180/267] Linking static target drivers/librte_bus_vdev.a 00:03:21.156 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.156 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:21.156 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:21.156 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:21.156 [185/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:21.156 [186/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:21.156 [187/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:21.156 [188/267] Linking static target lib/librte_mbuf.a 00:03:21.156 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:21.156 [190/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:21.157 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.157 [192/267] Linking static target lib/librte_hash.a 00:03:21.157 
[193/267] Linking target lib/librte_kvargs.so.24.1 00:03:21.157 [194/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.157 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.157 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.157 [197/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:21.157 [198/267] Linking static target drivers/librte_bus_pci.a 00:03:21.157 [199/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.157 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.157 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.157 [202/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:21.157 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:21.418 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:21.418 [205/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.418 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.418 [207/267] Linking static target lib/librte_cryptodev.a 00:03:21.418 [208/267] Linking static target drivers/librte_mempool_ring.a 00:03:21.418 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.418 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.418 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.418 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.418 [213/267] Linking target lib/librte_telemetry.so.24.1 00:03:21.418 [214/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:21.679 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:21.679 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.679 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.679 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.679 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:21.679 [220/267] Linking static target lib/librte_ethdev.a 00:03:21.941 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.941 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.941 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.941 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.203 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.203 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.776 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.776 [228/267] Linking static target lib/librte_vhost.a 00:03:23.717 
[229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.103 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.680 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.620 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.620 [233/267] Linking target lib/librte_eal.so.24.1 00:03:32.879 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:32.879 [235/267] Linking target lib/librte_ring.so.24.1 00:03:32.879 [236/267] Linking target lib/librte_meter.so.24.1 00:03:32.879 [237/267] Linking target lib/librte_timer.so.24.1 00:03:32.879 [238/267] Linking target lib/librte_pci.so.24.1 00:03:32.879 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:32.879 [240/267] Linking target lib/librte_dmadev.so.24.1 00:03:32.879 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:32.879 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:32.879 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:32.879 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:32.879 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:32.879 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:32.879 [247/267] Linking target lib/librte_rcu.so.24.1 00:03:32.879 [248/267] Linking target lib/librte_mempool.so.24.1 00:03:33.138 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:33.138 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:33.138 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:33.138 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:33.397 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:33.397 [254/267] Linking target lib/librte_compressdev.so.24.1 00:03:33.397 [255/267] Linking target lib/librte_net.so.24.1 00:03:33.397 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:33.397 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:33.397 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:33.397 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:33.661 [260/267] Linking target lib/librte_cmdline.so.24.1 00:03:33.661 [261/267] Linking target lib/librte_hash.so.24.1 00:03:33.661 [262/267] Linking target lib/librte_security.so.24.1 00:03:33.661 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:33.661 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:33.661 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:33.661 [266/267] Linking target lib/librte_power.so.24.1 00:03:33.661 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:33.661 INFO: autodetecting backend as ninja 00:03:33.661 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:37.967 CC lib/log/log.o 00:03:37.967 CC lib/ut_mock/mock.o 00:03:37.967 CC lib/log/log_flags.o 00:03:37.967 CC lib/ut/ut.o 00:03:37.967 CC lib/log/log_deprecated.o 
00:03:37.967 LIB libspdk_ut_mock.a 00:03:37.967 LIB libspdk_log.a 00:03:37.967 LIB libspdk_ut.a 00:03:37.967 SO libspdk_ut_mock.so.6.0 00:03:37.967 SO libspdk_log.so.7.1 00:03:37.967 SO libspdk_ut.so.2.0 00:03:37.967 SYMLINK libspdk_ut_mock.so 00:03:37.967 SYMLINK libspdk_log.so 00:03:37.967 SYMLINK libspdk_ut.so 00:03:37.967 CC lib/dma/dma.o 00:03:37.967 CC lib/util/base64.o 00:03:37.967 CC lib/util/bit_array.o 00:03:37.967 CC lib/util/cpuset.o 00:03:37.967 CC lib/util/crc16.o 00:03:37.967 CC lib/util/crc32.o 00:03:37.967 CXX lib/trace_parser/trace.o 00:03:37.967 CC lib/ioat/ioat.o 00:03:37.967 CC lib/util/crc32c.o 00:03:37.967 CC lib/util/crc32_ieee.o 00:03:37.967 CC lib/util/crc64.o 00:03:37.967 CC lib/util/dif.o 00:03:37.967 CC lib/util/fd.o 00:03:37.967 CC lib/util/fd_group.o 00:03:37.967 CC lib/util/file.o 00:03:37.967 CC lib/util/hexlify.o 00:03:37.967 CC lib/util/iov.o 00:03:37.967 CC lib/util/math.o 00:03:37.967 CC lib/util/net.o 00:03:37.967 CC lib/util/pipe.o 00:03:37.967 CC lib/util/strerror_tls.o 00:03:37.967 CC lib/util/string.o 00:03:37.967 CC lib/util/uuid.o 00:03:37.967 CC lib/util/xor.o 00:03:37.967 CC lib/util/zipf.o 00:03:37.967 CC lib/util/md5.o 00:03:38.228 CC lib/vfio_user/host/vfio_user_pci.o 00:03:38.228 CC lib/vfio_user/host/vfio_user.o 00:03:38.228 LIB libspdk_dma.a 00:03:38.228 SO libspdk_dma.so.5.0 00:03:38.228 LIB libspdk_ioat.a 00:03:38.228 SO libspdk_ioat.so.7.0 00:03:38.228 SYMLINK libspdk_dma.so 00:03:38.488 SYMLINK libspdk_ioat.so 00:03:38.488 LIB libspdk_vfio_user.a 00:03:38.488 SO libspdk_vfio_user.so.5.0 00:03:38.488 LIB libspdk_util.a 00:03:38.488 SYMLINK libspdk_vfio_user.so 00:03:38.488 SO libspdk_util.so.10.1 00:03:38.748 SYMLINK libspdk_util.so 00:03:38.748 LIB libspdk_trace_parser.a 00:03:38.748 SO libspdk_trace_parser.so.6.0 00:03:39.008 SYMLINK libspdk_trace_parser.so 00:03:39.008 CC lib/json/json_parse.o 00:03:39.008 CC lib/json/json_util.o 00:03:39.008 CC lib/json/json_write.o 00:03:39.008 CC lib/vmd/vmd.o 00:03:39.008 CC lib/vmd/led.o 00:03:39.008 CC lib/rdma_utils/rdma_utils.o 00:03:39.008 CC lib/idxd/idxd.o 00:03:39.008 CC lib/idxd/idxd_user.o 00:03:39.008 CC lib/env_dpdk/env.o 00:03:39.008 CC lib/idxd/idxd_kernel.o 00:03:39.008 CC lib/conf/conf.o 00:03:39.008 CC lib/env_dpdk/memory.o 00:03:39.008 CC lib/env_dpdk/pci.o 00:03:39.008 CC lib/env_dpdk/init.o 00:03:39.008 CC lib/env_dpdk/threads.o 00:03:39.008 CC lib/env_dpdk/pci_ioat.o 00:03:39.008 CC lib/env_dpdk/pci_virtio.o 00:03:39.008 CC lib/env_dpdk/pci_vmd.o 00:03:39.008 CC lib/env_dpdk/pci_idxd.o 00:03:39.008 CC lib/env_dpdk/sigbus_handler.o 00:03:39.008 CC lib/env_dpdk/pci_event.o 00:03:39.008 CC lib/env_dpdk/pci_dpdk.o 00:03:39.008 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:39.008 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:39.268 LIB libspdk_conf.a 00:03:39.268 SO libspdk_conf.so.6.0 00:03:39.529 LIB libspdk_rdma_utils.a 00:03:39.529 LIB libspdk_json.a 00:03:39.529 SYMLINK libspdk_conf.so 00:03:39.529 SO libspdk_rdma_utils.so.1.0 00:03:39.529 SO libspdk_json.so.6.0 00:03:39.529 SYMLINK libspdk_rdma_utils.so 00:03:39.529 SYMLINK libspdk_json.so 00:03:39.529 LIB libspdk_idxd.a 00:03:39.529 SO libspdk_idxd.so.12.1 00:03:39.789 LIB libspdk_vmd.a 00:03:39.789 SO libspdk_vmd.so.6.0 00:03:39.789 SYMLINK libspdk_idxd.so 00:03:39.789 SYMLINK libspdk_vmd.so 00:03:39.789 CC lib/rdma_provider/common.o 00:03:39.789 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:39.789 CC lib/jsonrpc/jsonrpc_server.o 00:03:39.789 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:39.789 CC lib/jsonrpc/jsonrpc_client.o 00:03:39.789 
CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:40.149 LIB libspdk_rdma_provider.a 00:03:40.149 SO libspdk_rdma_provider.so.7.0 00:03:40.149 LIB libspdk_jsonrpc.a 00:03:40.149 SO libspdk_jsonrpc.so.6.0 00:03:40.149 SYMLINK libspdk_rdma_provider.so 00:03:40.149 SYMLINK libspdk_jsonrpc.so 00:03:40.410 LIB libspdk_env_dpdk.a 00:03:40.410 SO libspdk_env_dpdk.so.15.1 00:03:40.671 SYMLINK libspdk_env_dpdk.so 00:03:40.671 CC lib/rpc/rpc.o 00:03:40.933 LIB libspdk_rpc.a 00:03:40.933 SO libspdk_rpc.so.6.0 00:03:40.933 SYMLINK libspdk_rpc.so 00:03:41.194 CC lib/trace/trace.o 00:03:41.194 CC lib/trace/trace_flags.o 00:03:41.194 CC lib/trace/trace_rpc.o 00:03:41.194 CC lib/keyring/keyring.o 00:03:41.194 CC lib/notify/notify.o 00:03:41.194 CC lib/keyring/keyring_rpc.o 00:03:41.194 CC lib/notify/notify_rpc.o 00:03:41.454 LIB libspdk_notify.a 00:03:41.455 SO libspdk_notify.so.6.0 00:03:41.455 LIB libspdk_keyring.a 00:03:41.455 LIB libspdk_trace.a 00:03:41.455 SYMLINK libspdk_notify.so 00:03:41.455 SO libspdk_keyring.so.2.0 00:03:41.715 SO libspdk_trace.so.11.0 00:03:41.715 SYMLINK libspdk_keyring.so 00:03:41.715 SYMLINK libspdk_trace.so 00:03:41.976 CC lib/thread/thread.o 00:03:41.976 CC lib/thread/iobuf.o 00:03:41.976 CC lib/sock/sock.o 00:03:41.976 CC lib/sock/sock_rpc.o 00:03:42.548 LIB libspdk_sock.a 00:03:42.548 SO libspdk_sock.so.10.0 00:03:42.548 SYMLINK libspdk_sock.so 00:03:42.810 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:42.810 CC lib/nvme/nvme_ctrlr.o 00:03:42.810 CC lib/nvme/nvme_fabric.o 00:03:42.810 CC lib/nvme/nvme_ns_cmd.o 00:03:42.810 CC lib/nvme/nvme_ns.o 00:03:42.810 CC lib/nvme/nvme_pcie_common.o 00:03:42.810 CC lib/nvme/nvme_pcie.o 00:03:42.810 CC lib/nvme/nvme_qpair.o 00:03:42.810 CC lib/nvme/nvme.o 00:03:42.810 CC lib/nvme/nvme_quirks.o 00:03:42.810 CC lib/nvme/nvme_transport.o 00:03:42.810 CC lib/nvme/nvme_discovery.o 00:03:42.810 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.810 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.810 CC lib/nvme/nvme_tcp.o 00:03:42.810 CC lib/nvme/nvme_opal.o 00:03:42.810 CC lib/nvme/nvme_io_msg.o 00:03:42.810 CC lib/nvme/nvme_poll_group.o 00:03:42.810 CC lib/nvme/nvme_zns.o 00:03:42.810 CC lib/nvme/nvme_stubs.o 00:03:42.810 CC lib/nvme/nvme_auth.o 00:03:42.810 CC lib/nvme/nvme_cuse.o 00:03:42.810 CC lib/nvme/nvme_vfio_user.o 00:03:42.810 CC lib/nvme/nvme_rdma.o 00:03:43.383 LIB libspdk_thread.a 00:03:43.383 SO libspdk_thread.so.11.0 00:03:43.383 SYMLINK libspdk_thread.so 00:03:43.644 CC lib/blob/blobstore.o 00:03:43.644 CC lib/blob/request.o 00:03:43.644 CC lib/blob/zeroes.o 00:03:43.644 CC lib/blob/blob_bs_dev.o 00:03:43.644 CC lib/fsdev/fsdev.o 00:03:43.644 CC lib/fsdev/fsdev_io.o 00:03:43.644 CC lib/fsdev/fsdev_rpc.o 00:03:43.644 CC lib/vfu_tgt/tgt_endpoint.o 00:03:43.644 CC lib/vfu_tgt/tgt_rpc.o 00:03:43.644 CC lib/init/json_config.o 00:03:43.644 CC lib/virtio/virtio.o 00:03:43.644 CC lib/init/subsystem.o 00:03:43.644 CC lib/accel/accel.o 00:03:43.644 CC lib/virtio/virtio_vhost_user.o 00:03:43.644 CC lib/init/subsystem_rpc.o 00:03:43.644 CC lib/accel/accel_rpc.o 00:03:43.644 CC lib/virtio/virtio_vfio_user.o 00:03:43.644 CC lib/init/rpc.o 00:03:43.644 CC lib/virtio/virtio_pci.o 00:03:43.644 CC lib/accel/accel_sw.o 00:03:44.218 LIB libspdk_init.a 00:03:44.218 SO libspdk_init.so.6.0 00:03:44.218 LIB libspdk_vfu_tgt.a 00:03:44.218 LIB libspdk_virtio.a 00:03:44.218 SO libspdk_vfu_tgt.so.3.0 00:03:44.218 SO libspdk_virtio.so.7.0 00:03:44.218 SYMLINK libspdk_init.so 00:03:44.218 SYMLINK libspdk_vfu_tgt.so 00:03:44.218 SYMLINK libspdk_virtio.so 00:03:44.484 LIB 
libspdk_fsdev.a 00:03:44.484 SO libspdk_fsdev.so.2.0 00:03:44.484 SYMLINK libspdk_fsdev.so 00:03:44.484 CC lib/event/app.o 00:03:44.484 CC lib/event/reactor.o 00:03:44.484 CC lib/event/app_rpc.o 00:03:44.484 CC lib/event/log_rpc.o 00:03:44.484 CC lib/event/scheduler_static.o 00:03:44.749 LIB libspdk_accel.a 00:03:44.749 LIB libspdk_nvme.a 00:03:44.749 SO libspdk_accel.so.16.0 00:03:44.749 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:44.749 SYMLINK libspdk_accel.so 00:03:45.011 SO libspdk_nvme.so.15.0 00:03:45.011 LIB libspdk_event.a 00:03:45.011 SO libspdk_event.so.14.0 00:03:45.011 SYMLINK libspdk_event.so 00:03:45.011 SYMLINK libspdk_nvme.so 00:03:45.271 CC lib/bdev/bdev.o 00:03:45.271 CC lib/bdev/bdev_rpc.o 00:03:45.271 CC lib/bdev/bdev_zone.o 00:03:45.271 CC lib/bdev/part.o 00:03:45.272 CC lib/bdev/scsi_nvme.o 00:03:45.532 LIB libspdk_fuse_dispatcher.a 00:03:45.532 SO libspdk_fuse_dispatcher.so.1.0 00:03:45.532 SYMLINK libspdk_fuse_dispatcher.so 00:03:46.475 LIB libspdk_blob.a 00:03:46.475 SO libspdk_blob.so.11.0 00:03:46.475 SYMLINK libspdk_blob.so 00:03:46.737 CC lib/blobfs/blobfs.o 00:03:46.737 CC lib/blobfs/tree.o 00:03:46.737 CC lib/lvol/lvol.o 00:03:47.682 LIB libspdk_bdev.a 00:03:47.682 LIB libspdk_blobfs.a 00:03:47.682 SO libspdk_bdev.so.17.0 00:03:47.682 SO libspdk_blobfs.so.10.0 00:03:47.682 LIB libspdk_lvol.a 00:03:47.682 SO libspdk_lvol.so.10.0 00:03:47.682 SYMLINK libspdk_blobfs.so 00:03:47.682 SYMLINK libspdk_bdev.so 00:03:47.682 SYMLINK libspdk_lvol.so 00:03:48.254 CC lib/nbd/nbd.o 00:03:48.254 CC lib/nbd/nbd_rpc.o 00:03:48.254 CC lib/nvmf/ctrlr.o 00:03:48.254 CC lib/nvmf/ctrlr_discovery.o 00:03:48.254 CC lib/ftl/ftl_core.o 00:03:48.254 CC lib/ublk/ublk.o 00:03:48.254 CC lib/ftl/ftl_init.o 00:03:48.254 CC lib/nvmf/ctrlr_bdev.o 00:03:48.254 CC lib/ublk/ublk_rpc.o 00:03:48.254 CC lib/nvmf/subsystem.o 00:03:48.254 CC lib/ftl/ftl_layout.o 00:03:48.254 CC lib/nvmf/nvmf.o 00:03:48.254 CC lib/ftl/ftl_debug.o 00:03:48.254 CC lib/nvmf/nvmf_rpc.o 00:03:48.254 CC lib/ftl/ftl_sb.o 00:03:48.254 CC lib/ftl/ftl_io.o 00:03:48.254 CC lib/nvmf/transport.o 00:03:48.254 CC lib/scsi/dev.o 00:03:48.254 CC lib/ftl/ftl_l2p.o 00:03:48.254 CC lib/nvmf/tcp.o 00:03:48.254 CC lib/scsi/lun.o 00:03:48.254 CC lib/nvmf/stubs.o 00:03:48.254 CC lib/ftl/ftl_l2p_flat.o 00:03:48.254 CC lib/scsi/port.o 00:03:48.254 CC lib/nvmf/mdns_server.o 00:03:48.254 CC lib/ftl/ftl_nv_cache.o 00:03:48.254 CC lib/scsi/scsi.o 00:03:48.254 CC lib/nvmf/vfio_user.o 00:03:48.254 CC lib/ftl/ftl_band.o 00:03:48.254 CC lib/nvmf/rdma.o 00:03:48.254 CC lib/scsi/scsi_bdev.o 00:03:48.254 CC lib/nvmf/auth.o 00:03:48.254 CC lib/ftl/ftl_band_ops.o 00:03:48.254 CC lib/scsi/scsi_pr.o 00:03:48.254 CC lib/scsi/scsi_rpc.o 00:03:48.254 CC lib/ftl/ftl_rq.o 00:03:48.254 CC lib/scsi/task.o 00:03:48.254 CC lib/ftl/ftl_writer.o 00:03:48.254 CC lib/ftl/ftl_reloc.o 00:03:48.254 CC lib/ftl/ftl_l2p_cache.o 00:03:48.254 CC lib/ftl/ftl_p2l.o 00:03:48.254 CC lib/ftl/ftl_p2l_log.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:48.254 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:48.254 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:48.254 CC lib/ftl/utils/ftl_conf.o 00:03:48.254 CC lib/ftl/utils/ftl_md.o 00:03:48.254 CC lib/ftl/utils/ftl_mempool.o 00:03:48.254 CC lib/ftl/utils/ftl_bitmap.o 00:03:48.254 CC lib/ftl/utils/ftl_property.o 00:03:48.254 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:48.254 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:48.254 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:48.254 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:48.254 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:48.254 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:48.254 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:48.254 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:48.254 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:48.254 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:48.254 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:48.254 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:48.254 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:48.254 CC lib/ftl/base/ftl_base_dev.o 00:03:48.254 CC lib/ftl/base/ftl_base_bdev.o 00:03:48.254 CC lib/ftl/ftl_trace.o 00:03:48.515 LIB libspdk_nbd.a 00:03:48.776 SO libspdk_nbd.so.7.0 00:03:48.776 LIB libspdk_scsi.a 00:03:48.776 SYMLINK libspdk_nbd.so 00:03:48.776 SO libspdk_scsi.so.9.0 00:03:48.776 LIB libspdk_ublk.a 00:03:48.776 SYMLINK libspdk_scsi.so 00:03:48.776 SO libspdk_ublk.so.3.0 00:03:49.037 SYMLINK libspdk_ublk.so 00:03:49.299 CC lib/vhost/vhost.o 00:03:49.299 CC lib/vhost/vhost_rpc.o 00:03:49.299 CC lib/vhost/vhost_scsi.o 00:03:49.299 CC lib/vhost/vhost_blk.o 00:03:49.299 CC lib/vhost/rte_vhost_user.o 00:03:49.299 CC lib/iscsi/conn.o 00:03:49.299 CC lib/iscsi/init_grp.o 00:03:49.299 CC lib/iscsi/param.o 00:03:49.299 CC lib/iscsi/iscsi.o 00:03:49.299 CC lib/iscsi/portal_grp.o 00:03:49.299 CC lib/iscsi/tgt_node.o 00:03:49.299 LIB libspdk_ftl.a 00:03:49.299 CC lib/iscsi/iscsi_subsystem.o 00:03:49.299 CC lib/iscsi/iscsi_rpc.o 00:03:49.299 CC lib/iscsi/task.o 00:03:49.561 SO libspdk_ftl.so.9.0 00:03:49.822 SYMLINK libspdk_ftl.so 00:03:50.082 LIB libspdk_nvmf.a 00:03:50.082 SO libspdk_nvmf.so.20.0 00:03:50.082 LIB libspdk_vhost.a 00:03:50.344 SO libspdk_vhost.so.8.0 00:03:50.344 SYMLINK libspdk_vhost.so 00:03:50.344 SYMLINK libspdk_nvmf.so 00:03:50.605 LIB libspdk_iscsi.a 00:03:50.605 SO libspdk_iscsi.so.8.0 00:03:50.605 SYMLINK libspdk_iscsi.so 00:03:51.177 CC module/vfu_device/vfu_virtio.o 00:03:51.177 CC module/vfu_device/vfu_virtio_blk.o 00:03:51.177 CC module/env_dpdk/env_dpdk_rpc.o 00:03:51.177 CC module/vfu_device/vfu_virtio_scsi.o 00:03:51.177 CC module/vfu_device/vfu_virtio_rpc.o 00:03:51.177 CC module/vfu_device/vfu_virtio_fs.o 00:03:51.437 LIB libspdk_env_dpdk_rpc.a 00:03:51.437 CC module/sock/posix/posix.o 00:03:51.437 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:51.437 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:51.437 CC module/accel/iaa/accel_iaa_rpc.o 00:03:51.437 CC module/accel/iaa/accel_iaa.o 00:03:51.437 CC module/keyring/file/keyring_rpc.o 00:03:51.437 CC module/keyring/file/keyring.o 00:03:51.437 CC module/accel/error/accel_error.o 00:03:51.437 CC module/accel/dsa/accel_dsa.o 00:03:51.437 CC module/accel/dsa/accel_dsa_rpc.o 00:03:51.437 CC module/accel/error/accel_error_rpc.o 00:03:51.437 CC module/fsdev/aio/fsdev_aio.o 00:03:51.437 CC module/blob/bdev/blob_bdev.o 00:03:51.437 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:51.437 CC module/scheduler/gscheduler/gscheduler.o 00:03:51.437 CC module/keyring/linux/keyring.o 00:03:51.437 CC module/fsdev/aio/linux_aio_mgr.o 00:03:51.437 CC module/accel/ioat/accel_ioat.o 00:03:51.437 CC module/keyring/linux/keyring_rpc.o 
00:03:51.437 CC module/accel/ioat/accel_ioat_rpc.o 00:03:51.437 SO libspdk_env_dpdk_rpc.so.6.0 00:03:51.437 SYMLINK libspdk_env_dpdk_rpc.so 00:03:51.697 LIB libspdk_keyring_file.a 00:03:51.697 LIB libspdk_keyring_linux.a 00:03:51.697 LIB libspdk_scheduler_dpdk_governor.a 00:03:51.697 LIB libspdk_scheduler_gscheduler.a 00:03:51.697 LIB libspdk_accel_error.a 00:03:51.697 LIB libspdk_accel_ioat.a 00:03:51.697 SO libspdk_keyring_file.so.2.0 00:03:51.697 SO libspdk_keyring_linux.so.1.0 00:03:51.697 LIB libspdk_scheduler_dynamic.a 00:03:51.697 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:51.697 SO libspdk_scheduler_gscheduler.so.4.0 00:03:51.697 LIB libspdk_accel_iaa.a 00:03:51.697 SO libspdk_accel_error.so.2.0 00:03:51.697 SO libspdk_accel_ioat.so.6.0 00:03:51.697 SO libspdk_scheduler_dynamic.so.4.0 00:03:51.697 SYMLINK libspdk_keyring_file.so 00:03:51.697 LIB libspdk_blob_bdev.a 00:03:51.697 LIB libspdk_accel_dsa.a 00:03:51.697 SO libspdk_accel_iaa.so.3.0 00:03:51.697 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:51.697 SYMLINK libspdk_keyring_linux.so 00:03:51.697 SYMLINK libspdk_scheduler_gscheduler.so 00:03:51.697 SO libspdk_blob_bdev.so.11.0 00:03:51.697 SO libspdk_accel_dsa.so.5.0 00:03:51.697 SYMLINK libspdk_accel_error.so 00:03:51.957 SYMLINK libspdk_accel_ioat.so 00:03:51.957 SYMLINK libspdk_scheduler_dynamic.so 00:03:51.957 SYMLINK libspdk_accel_iaa.so 00:03:51.957 LIB libspdk_vfu_device.a 00:03:51.957 SYMLINK libspdk_blob_bdev.so 00:03:51.957 SYMLINK libspdk_accel_dsa.so 00:03:51.957 SO libspdk_vfu_device.so.3.0 00:03:51.957 SYMLINK libspdk_vfu_device.so 00:03:52.226 LIB libspdk_fsdev_aio.a 00:03:52.226 LIB libspdk_sock_posix.a 00:03:52.226 SO libspdk_fsdev_aio.so.1.0 00:03:52.226 SO libspdk_sock_posix.so.6.0 00:03:52.226 SYMLINK libspdk_fsdev_aio.so 00:03:52.226 SYMLINK libspdk_sock_posix.so 00:03:52.486 CC module/blobfs/bdev/blobfs_bdev.o 00:03:52.486 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:52.486 CC module/bdev/gpt/gpt.o 00:03:52.486 CC module/bdev/null/bdev_null_rpc.o 00:03:52.486 CC module/bdev/null/bdev_null.o 00:03:52.486 CC module/bdev/gpt/vbdev_gpt.o 00:03:52.486 CC module/bdev/error/vbdev_error.o 00:03:52.486 CC module/bdev/aio/bdev_aio.o 00:03:52.486 CC module/bdev/raid/bdev_raid.o 00:03:52.486 CC module/bdev/raid/bdev_raid_rpc.o 00:03:52.486 CC module/bdev/aio/bdev_aio_rpc.o 00:03:52.486 CC module/bdev/lvol/vbdev_lvol.o 00:03:52.486 CC module/bdev/split/vbdev_split.o 00:03:52.486 CC module/bdev/passthru/vbdev_passthru.o 00:03:52.486 CC module/bdev/malloc/bdev_malloc.o 00:03:52.486 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:52.486 CC module/bdev/error/vbdev_error_rpc.o 00:03:52.486 CC module/bdev/raid/bdev_raid_sb.o 00:03:52.486 CC module/bdev/split/vbdev_split_rpc.o 00:03:52.486 CC module/bdev/delay/vbdev_delay.o 00:03:52.486 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:52.486 CC module/bdev/ftl/bdev_ftl.o 00:03:52.486 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:52.486 CC module/bdev/raid/raid0.o 00:03:52.486 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:52.486 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:52.486 CC module/bdev/raid/raid1.o 00:03:52.486 CC module/bdev/raid/concat.o 00:03:52.486 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:52.486 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:52.486 CC module/bdev/iscsi/bdev_iscsi.o 00:03:52.486 CC module/bdev/nvme/bdev_nvme.o 00:03:52.486 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:52.486 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:52.486 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:52.486 CC 
module/bdev/nvme/nvme_rpc.o 00:03:52.486 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:52.486 CC module/bdev/nvme/bdev_mdns_client.o 00:03:52.486 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:52.486 CC module/bdev/nvme/vbdev_opal.o 00:03:52.486 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:52.486 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:52.747 LIB libspdk_blobfs_bdev.a 00:03:52.747 SO libspdk_blobfs_bdev.so.6.0 00:03:52.747 LIB libspdk_bdev_null.a 00:03:52.747 LIB libspdk_bdev_gpt.a 00:03:52.747 LIB libspdk_bdev_error.a 00:03:52.747 LIB libspdk_bdev_split.a 00:03:52.747 SYMLINK libspdk_blobfs_bdev.so 00:03:52.747 SO libspdk_bdev_gpt.so.6.0 00:03:52.747 SO libspdk_bdev_null.so.6.0 00:03:52.747 SO libspdk_bdev_split.so.6.0 00:03:52.747 SO libspdk_bdev_error.so.6.0 00:03:53.007 LIB libspdk_bdev_passthru.a 00:03:53.007 LIB libspdk_bdev_aio.a 00:03:53.007 LIB libspdk_bdev_ftl.a 00:03:53.007 SO libspdk_bdev_passthru.so.6.0 00:03:53.007 SO libspdk_bdev_aio.so.6.0 00:03:53.007 SYMLINK libspdk_bdev_gpt.so 00:03:53.007 SO libspdk_bdev_ftl.so.6.0 00:03:53.007 SYMLINK libspdk_bdev_split.so 00:03:53.007 SYMLINK libspdk_bdev_null.so 00:03:53.007 SYMLINK libspdk_bdev_error.so 00:03:53.007 LIB libspdk_bdev_malloc.a 00:03:53.007 LIB libspdk_bdev_zone_block.a 00:03:53.007 LIB libspdk_bdev_delay.a 00:03:53.007 LIB libspdk_bdev_iscsi.a 00:03:53.007 SO libspdk_bdev_malloc.so.6.0 00:03:53.007 SO libspdk_bdev_zone_block.so.6.0 00:03:53.007 SYMLINK libspdk_bdev_aio.so 00:03:53.007 SYMLINK libspdk_bdev_passthru.so 00:03:53.007 SO libspdk_bdev_iscsi.so.6.0 00:03:53.007 SO libspdk_bdev_delay.so.6.0 00:03:53.007 SYMLINK libspdk_bdev_ftl.so 00:03:53.007 SYMLINK libspdk_bdev_malloc.so 00:03:53.007 SYMLINK libspdk_bdev_zone_block.so 00:03:53.007 SYMLINK libspdk_bdev_iscsi.so 00:03:53.007 SYMLINK libspdk_bdev_delay.so 00:03:53.007 LIB libspdk_bdev_lvol.a 00:03:53.007 LIB libspdk_bdev_virtio.a 00:03:53.268 SO libspdk_bdev_lvol.so.6.0 00:03:53.268 SO libspdk_bdev_virtio.so.6.0 00:03:53.268 SYMLINK libspdk_bdev_lvol.so 00:03:53.268 SYMLINK libspdk_bdev_virtio.so 00:03:53.528 LIB libspdk_bdev_raid.a 00:03:53.528 SO libspdk_bdev_raid.so.6.0 00:03:53.528 SYMLINK libspdk_bdev_raid.so 00:03:54.912 LIB libspdk_bdev_nvme.a 00:03:54.912 SO libspdk_bdev_nvme.so.7.1 00:03:54.912 SYMLINK libspdk_bdev_nvme.so 00:03:55.856 CC module/event/subsystems/iobuf/iobuf.o 00:03:55.856 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:55.856 CC module/event/subsystems/keyring/keyring.o 00:03:55.856 CC module/event/subsystems/vmd/vmd.o 00:03:55.856 CC module/event/subsystems/scheduler/scheduler.o 00:03:55.856 CC module/event/subsystems/sock/sock.o 00:03:55.856 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:55.856 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:55.856 CC module/event/subsystems/fsdev/fsdev.o 00:03:55.856 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:55.856 LIB libspdk_event_keyring.a 00:03:55.856 LIB libspdk_event_scheduler.a 00:03:55.856 LIB libspdk_event_vmd.a 00:03:55.856 LIB libspdk_event_iobuf.a 00:03:55.856 LIB libspdk_event_vhost_blk.a 00:03:55.856 LIB libspdk_event_sock.a 00:03:55.856 LIB libspdk_event_fsdev.a 00:03:55.856 LIB libspdk_event_vfu_tgt.a 00:03:55.856 SO libspdk_event_keyring.so.1.0 00:03:55.856 SO libspdk_event_vhost_blk.so.3.0 00:03:55.856 SO libspdk_event_scheduler.so.4.0 00:03:55.856 SO libspdk_event_iobuf.so.3.0 00:03:56.117 SO libspdk_event_vmd.so.6.0 00:03:56.117 SO libspdk_event_sock.so.5.0 00:03:56.117 SO libspdk_event_fsdev.so.1.0 00:03:56.117 SO libspdk_event_vfu_tgt.so.3.0 
00:03:56.117 SYMLINK libspdk_event_keyring.so 00:03:56.117 SYMLINK libspdk_event_sock.so 00:03:56.117 SYMLINK libspdk_event_fsdev.so 00:03:56.117 SYMLINK libspdk_event_iobuf.so 00:03:56.117 SYMLINK libspdk_event_vhost_blk.so 00:03:56.117 SYMLINK libspdk_event_scheduler.so 00:03:56.117 SYMLINK libspdk_event_vfu_tgt.so 00:03:56.117 SYMLINK libspdk_event_vmd.so 00:03:56.378 CC module/event/subsystems/accel/accel.o 00:03:56.639 LIB libspdk_event_accel.a 00:03:56.639 SO libspdk_event_accel.so.6.0 00:03:56.639 SYMLINK libspdk_event_accel.so 00:03:56.900 CC module/event/subsystems/bdev/bdev.o 00:03:57.161 LIB libspdk_event_bdev.a 00:03:57.161 SO libspdk_event_bdev.so.6.0 00:03:57.422 SYMLINK libspdk_event_bdev.so 00:03:57.683 CC module/event/subsystems/scsi/scsi.o 00:03:57.683 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:57.683 CC module/event/subsystems/ublk/ublk.o 00:03:57.683 CC module/event/subsystems/nbd/nbd.o 00:03:57.683 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:57.945 LIB libspdk_event_nbd.a 00:03:57.945 LIB libspdk_event_ublk.a 00:03:57.945 SO libspdk_event_nbd.so.6.0 00:03:57.945 LIB libspdk_event_scsi.a 00:03:57.945 SO libspdk_event_ublk.so.3.0 00:03:57.945 SO libspdk_event_scsi.so.6.0 00:03:57.945 SYMLINK libspdk_event_nbd.so 00:03:57.945 LIB libspdk_event_nvmf.a 00:03:57.945 SYMLINK libspdk_event_ublk.so 00:03:57.945 SO libspdk_event_nvmf.so.6.0 00:03:57.945 SYMLINK libspdk_event_scsi.so 00:03:57.945 SYMLINK libspdk_event_nvmf.so 00:03:58.206 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.469 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.469 LIB libspdk_event_vhost_scsi.a 00:03:58.469 LIB libspdk_event_iscsi.a 00:03:58.469 SO libspdk_event_vhost_scsi.so.3.0 00:03:58.469 SO libspdk_event_iscsi.so.6.0 00:03:58.730 SYMLINK libspdk_event_vhost_scsi.so 00:03:58.730 SYMLINK libspdk_event_iscsi.so 00:03:58.730 SO libspdk.so.6.0 00:03:58.730 SYMLINK libspdk.so 00:03:59.304 CC app/trace_record/trace_record.o 00:03:59.304 CXX app/trace/trace.o 00:03:59.304 CC app/spdk_nvme_perf/perf.o 00:03:59.304 CC app/spdk_lspci/spdk_lspci.o 00:03:59.304 CC app/spdk_nvme_identify/identify.o 00:03:59.304 TEST_HEADER include/spdk/accel.h 00:03:59.304 CC app/spdk_top/spdk_top.o 00:03:59.304 TEST_HEADER include/spdk/accel_module.h 00:03:59.304 CC test/rpc_client/rpc_client_test.o 00:03:59.304 TEST_HEADER include/spdk/assert.h 00:03:59.304 TEST_HEADER include/spdk/barrier.h 00:03:59.304 TEST_HEADER include/spdk/base64.h 00:03:59.304 TEST_HEADER include/spdk/bdev_module.h 00:03:59.304 TEST_HEADER include/spdk/bdev.h 00:03:59.304 TEST_HEADER include/spdk/bdev_zone.h 00:03:59.304 TEST_HEADER include/spdk/bit_array.h 00:03:59.304 TEST_HEADER include/spdk/bit_pool.h 00:03:59.304 TEST_HEADER include/spdk/blob_bdev.h 00:03:59.304 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.304 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:59.304 TEST_HEADER include/spdk/blobfs.h 00:03:59.304 TEST_HEADER include/spdk/blob.h 00:03:59.304 TEST_HEADER include/spdk/conf.h 00:03:59.304 TEST_HEADER include/spdk/config.h 00:03:59.304 TEST_HEADER include/spdk/crc16.h 00:03:59.304 TEST_HEADER include/spdk/cpuset.h 00:03:59.304 TEST_HEADER include/spdk/crc32.h 00:03:59.304 TEST_HEADER include/spdk/crc64.h 00:03:59.304 TEST_HEADER include/spdk/dif.h 00:03:59.304 TEST_HEADER include/spdk/dma.h 00:03:59.304 TEST_HEADER include/spdk/endian.h 00:03:59.304 TEST_HEADER include/spdk/env_dpdk.h 00:03:59.304 TEST_HEADER include/spdk/env.h 00:03:59.304 TEST_HEADER include/spdk/fd_group.h 00:03:59.304 TEST_HEADER 
include/spdk/fd.h 00:03:59.304 TEST_HEADER include/spdk/event.h 00:03:59.304 TEST_HEADER include/spdk/file.h 00:03:59.304 TEST_HEADER include/spdk/fsdev.h 00:03:59.304 TEST_HEADER include/spdk/fsdev_module.h 00:03:59.304 CC app/iscsi_tgt/iscsi_tgt.o 00:03:59.304 TEST_HEADER include/spdk/ftl.h 00:03:59.304 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:59.304 TEST_HEADER include/spdk/histogram_data.h 00:03:59.304 TEST_HEADER include/spdk/gpt_spec.h 00:03:59.304 TEST_HEADER include/spdk/hexlify.h 00:03:59.304 TEST_HEADER include/spdk/idxd.h 00:03:59.304 TEST_HEADER include/spdk/idxd_spec.h 00:03:59.304 CC app/nvmf_tgt/nvmf_main.o 00:03:59.304 TEST_HEADER include/spdk/init.h 00:03:59.304 TEST_HEADER include/spdk/ioat.h 00:03:59.304 TEST_HEADER include/spdk/ioat_spec.h 00:03:59.304 TEST_HEADER include/spdk/iscsi_spec.h 00:03:59.304 TEST_HEADER include/spdk/json.h 00:03:59.304 CC app/spdk_dd/spdk_dd.o 00:03:59.304 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:59.304 TEST_HEADER include/spdk/keyring.h 00:03:59.304 TEST_HEADER include/spdk/jsonrpc.h 00:03:59.304 TEST_HEADER include/spdk/keyring_module.h 00:03:59.304 TEST_HEADER include/spdk/log.h 00:03:59.304 TEST_HEADER include/spdk/likely.h 00:03:59.304 TEST_HEADER include/spdk/lvol.h 00:03:59.304 TEST_HEADER include/spdk/md5.h 00:03:59.304 TEST_HEADER include/spdk/memory.h 00:03:59.304 TEST_HEADER include/spdk/mmio.h 00:03:59.304 TEST_HEADER include/spdk/nbd.h 00:03:59.304 TEST_HEADER include/spdk/net.h 00:03:59.304 TEST_HEADER include/spdk/notify.h 00:03:59.304 TEST_HEADER include/spdk/nvme.h 00:03:59.304 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:59.304 TEST_HEADER include/spdk/nvme_intel.h 00:03:59.304 TEST_HEADER include/spdk/nvme_spec.h 00:03:59.304 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:59.304 CC app/spdk_tgt/spdk_tgt.o 00:03:59.304 TEST_HEADER include/spdk/nvme_zns.h 00:03:59.304 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:59.304 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:59.304 TEST_HEADER include/spdk/nvmf.h 00:03:59.304 TEST_HEADER include/spdk/nvmf_transport.h 00:03:59.304 TEST_HEADER include/spdk/nvmf_spec.h 00:03:59.304 TEST_HEADER include/spdk/opal_spec.h 00:03:59.304 TEST_HEADER include/spdk/opal.h 00:03:59.304 TEST_HEADER include/spdk/pci_ids.h 00:03:59.304 TEST_HEADER include/spdk/pipe.h 00:03:59.304 TEST_HEADER include/spdk/queue.h 00:03:59.304 TEST_HEADER include/spdk/reduce.h 00:03:59.304 TEST_HEADER include/spdk/rpc.h 00:03:59.304 TEST_HEADER include/spdk/scheduler.h 00:03:59.304 TEST_HEADER include/spdk/scsi_spec.h 00:03:59.304 TEST_HEADER include/spdk/scsi.h 00:03:59.304 TEST_HEADER include/spdk/sock.h 00:03:59.304 TEST_HEADER include/spdk/stdinc.h 00:03:59.304 TEST_HEADER include/spdk/string.h 00:03:59.304 TEST_HEADER include/spdk/thread.h 00:03:59.304 TEST_HEADER include/spdk/trace.h 00:03:59.304 TEST_HEADER include/spdk/trace_parser.h 00:03:59.304 TEST_HEADER include/spdk/tree.h 00:03:59.304 TEST_HEADER include/spdk/ublk.h 00:03:59.304 TEST_HEADER include/spdk/util.h 00:03:59.304 TEST_HEADER include/spdk/uuid.h 00:03:59.304 TEST_HEADER include/spdk/version.h 00:03:59.304 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:59.304 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:59.304 TEST_HEADER include/spdk/vhost.h 00:03:59.304 TEST_HEADER include/spdk/vmd.h 00:03:59.304 TEST_HEADER include/spdk/xor.h 00:03:59.304 TEST_HEADER include/spdk/zipf.h 00:03:59.304 CXX test/cpp_headers/accel.o 00:03:59.304 CXX test/cpp_headers/accel_module.o 00:03:59.304 CXX test/cpp_headers/assert.o 00:03:59.304 CXX 
test/cpp_headers/barrier.o 00:03:59.304 CXX test/cpp_headers/base64.o 00:03:59.304 CXX test/cpp_headers/bdev.o 00:03:59.304 CXX test/cpp_headers/bdev_module.o 00:03:59.304 CXX test/cpp_headers/bdev_zone.o 00:03:59.304 CXX test/cpp_headers/bit_array.o 00:03:59.304 CXX test/cpp_headers/blobfs_bdev.o 00:03:59.304 CXX test/cpp_headers/bit_pool.o 00:03:59.304 CXX test/cpp_headers/blob_bdev.o 00:03:59.304 CXX test/cpp_headers/blobfs.o 00:03:59.304 CXX test/cpp_headers/blob.o 00:03:59.304 CXX test/cpp_headers/conf.o 00:03:59.304 CXX test/cpp_headers/cpuset.o 00:03:59.304 CXX test/cpp_headers/config.o 00:03:59.304 CXX test/cpp_headers/crc32.o 00:03:59.304 CXX test/cpp_headers/crc16.o 00:03:59.304 CXX test/cpp_headers/crc64.o 00:03:59.304 CXX test/cpp_headers/dif.o 00:03:59.304 CXX test/cpp_headers/dma.o 00:03:59.304 CXX test/cpp_headers/env_dpdk.o 00:03:59.304 CXX test/cpp_headers/endian.o 00:03:59.304 CXX test/cpp_headers/env.o 00:03:59.304 CXX test/cpp_headers/fd.o 00:03:59.565 CXX test/cpp_headers/event.o 00:03:59.565 CXX test/cpp_headers/fd_group.o 00:03:59.565 CXX test/cpp_headers/fsdev.o 00:03:59.565 CXX test/cpp_headers/file.o 00:03:59.565 CXX test/cpp_headers/fsdev_module.o 00:03:59.565 CXX test/cpp_headers/fuse_dispatcher.o 00:03:59.565 CXX test/cpp_headers/ftl.o 00:03:59.565 CXX test/cpp_headers/gpt_spec.o 00:03:59.565 CXX test/cpp_headers/histogram_data.o 00:03:59.565 CXX test/cpp_headers/hexlify.o 00:03:59.565 CXX test/cpp_headers/idxd.o 00:03:59.565 CXX test/cpp_headers/idxd_spec.o 00:03:59.565 CXX test/cpp_headers/init.o 00:03:59.565 CXX test/cpp_headers/ioat.o 00:03:59.565 CXX test/cpp_headers/iscsi_spec.o 00:03:59.565 CXX test/cpp_headers/ioat_spec.o 00:03:59.565 CXX test/cpp_headers/jsonrpc.o 00:03:59.565 CXX test/cpp_headers/keyring_module.o 00:03:59.565 CXX test/cpp_headers/json.o 00:03:59.565 CXX test/cpp_headers/keyring.o 00:03:59.565 CXX test/cpp_headers/likely.o 00:03:59.565 CXX test/cpp_headers/log.o 00:03:59.565 CXX test/cpp_headers/memory.o 00:03:59.565 CXX test/cpp_headers/lvol.o 00:03:59.565 CXX test/cpp_headers/mmio.o 00:03:59.565 CC examples/util/zipf/zipf.o 00:03:59.565 CXX test/cpp_headers/md5.o 00:03:59.565 CXX test/cpp_headers/nbd.o 00:03:59.565 CXX test/cpp_headers/nvme.o 00:03:59.565 CXX test/cpp_headers/net.o 00:03:59.565 CC test/thread/poller_perf/poller_perf.o 00:03:59.565 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:59.565 CXX test/cpp_headers/notify.o 00:03:59.565 CXX test/cpp_headers/nvme_intel.o 00:03:59.565 CXX test/cpp_headers/nvme_ocssd.o 00:03:59.565 CXX test/cpp_headers/nvme_zns.o 00:03:59.565 CXX test/cpp_headers/nvme_spec.o 00:03:59.565 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:59.565 CXX test/cpp_headers/nvmf_cmd.o 00:03:59.565 CXX test/cpp_headers/nvmf.o 00:03:59.565 LINK spdk_lspci 00:03:59.565 CXX test/cpp_headers/nvmf_spec.o 00:03:59.565 CC examples/ioat/verify/verify.o 00:03:59.565 CXX test/cpp_headers/opal_spec.o 00:03:59.565 CXX test/cpp_headers/nvmf_transport.o 00:03:59.565 CXX test/cpp_headers/pci_ids.o 00:03:59.565 CXX test/cpp_headers/opal.o 00:03:59.565 CXX test/cpp_headers/queue.o 00:03:59.565 CXX test/cpp_headers/reduce.o 00:03:59.565 CC test/app/jsoncat/jsoncat.o 00:03:59.565 CXX test/cpp_headers/scheduler.o 00:03:59.565 CC test/app/stub/stub.o 00:03:59.565 CXX test/cpp_headers/pipe.o 00:03:59.565 CXX test/cpp_headers/rpc.o 00:03:59.565 CXX test/cpp_headers/scsi_spec.o 00:03:59.565 CC test/env/pci/pci_ut.o 00:03:59.565 CXX test/cpp_headers/sock.o 00:03:59.565 CC examples/ioat/perf/perf.o 00:03:59.565 CXX test/cpp_headers/scsi.o 
00:03:59.565 CXX test/cpp_headers/string.o 00:03:59.565 CXX test/cpp_headers/thread.o 00:03:59.565 CXX test/cpp_headers/stdinc.o 00:03:59.565 CXX test/cpp_headers/trace.o 00:03:59.565 CXX test/cpp_headers/trace_parser.o 00:03:59.565 CC test/env/memory/memory_ut.o 00:03:59.565 CXX test/cpp_headers/tree.o 00:03:59.565 CC test/app/histogram_perf/histogram_perf.o 00:03:59.565 CXX test/cpp_headers/ublk.o 00:03:59.565 CC test/env/vtophys/vtophys.o 00:03:59.565 CXX test/cpp_headers/util.o 00:03:59.565 CXX test/cpp_headers/uuid.o 00:03:59.565 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:59.565 CXX test/cpp_headers/version.o 00:03:59.565 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.565 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.565 CXX test/cpp_headers/vmd.o 00:03:59.565 CXX test/cpp_headers/vhost.o 00:03:59.565 CXX test/cpp_headers/zipf.o 00:03:59.565 CXX test/cpp_headers/xor.o 00:03:59.565 CC app/fio/nvme/fio_plugin.o 00:03:59.565 LINK rpc_client_test 00:03:59.565 CC test/app/bdev_svc/bdev_svc.o 00:03:59.565 CC test/dma/test_dma/test_dma.o 00:03:59.565 CC app/fio/bdev/fio_plugin.o 00:03:59.832 LINK spdk_nvme_discover 00:03:59.832 LINK nvmf_tgt 00:04:00.102 LINK spdk_trace_record 00:04:00.102 LINK iscsi_tgt 00:04:00.102 LINK interrupt_tgt 00:04:00.102 LINK jsoncat 00:04:00.102 LINK spdk_tgt 00:04:00.370 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:00.370 LINK spdk_dd 00:04:00.370 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:00.370 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.370 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.370 CC test/env/mem_callbacks/mem_callbacks.o 00:04:00.370 LINK spdk_trace 00:04:00.631 LINK vtophys 00:04:00.631 LINK env_dpdk_post_init 00:04:00.631 LINK zipf 00:04:00.631 LINK poller_perf 00:04:00.631 LINK histogram_perf 00:04:00.631 LINK verify 00:04:00.631 LINK stub 00:04:00.631 LINK ioat_perf 00:04:00.631 LINK bdev_svc 00:04:00.891 LINK pci_ut 00:04:00.891 CC app/vhost/vhost.o 00:04:00.891 LINK vhost_fuzz 00:04:00.891 LINK nvme_fuzz 00:04:00.891 LINK test_dma 00:04:00.891 LINK spdk_nvme 00:04:01.153 LINK spdk_bdev 00:04:01.153 LINK spdk_nvme_perf 00:04:01.153 LINK spdk_nvme_identify 00:04:01.153 CC examples/idxd/perf/perf.o 00:04:01.153 CC examples/sock/hello_world/hello_sock.o 00:04:01.153 CC examples/vmd/led/led.o 00:04:01.153 CC examples/vmd/lsvmd/lsvmd.o 00:04:01.153 LINK mem_callbacks 00:04:01.153 CC test/event/event_perf/event_perf.o 00:04:01.153 CC test/event/reactor/reactor.o 00:04:01.153 CC test/event/app_repeat/app_repeat.o 00:04:01.153 CC test/event/reactor_perf/reactor_perf.o 00:04:01.153 LINK spdk_top 00:04:01.153 CC examples/thread/thread/thread_ex.o 00:04:01.153 CC test/event/scheduler/scheduler.o 00:04:01.153 LINK vhost 00:04:01.415 LINK lsvmd 00:04:01.415 LINK led 00:04:01.415 LINK event_perf 00:04:01.415 LINK reactor 00:04:01.415 LINK reactor_perf 00:04:01.415 LINK app_repeat 00:04:01.415 LINK hello_sock 00:04:01.415 LINK scheduler 00:04:01.415 LINK idxd_perf 00:04:01.415 LINK thread 00:04:01.676 CC test/nvme/reset/reset.o 00:04:01.676 CC test/nvme/e2edp/nvme_dp.o 00:04:01.676 CC test/nvme/startup/startup.o 00:04:01.676 CC test/nvme/overhead/overhead.o 00:04:01.676 CC test/nvme/connect_stress/connect_stress.o 00:04:01.676 CC test/nvme/reserve/reserve.o 00:04:01.676 CC test/nvme/sgl/sgl.o 00:04:01.676 CC test/nvme/simple_copy/simple_copy.o 00:04:01.676 CC test/nvme/cuse/cuse.o 00:04:01.676 CC test/nvme/boot_partition/boot_partition.o 00:04:01.676 CC test/nvme/compliance/nvme_compliance.o 00:04:01.676 CC 
test/nvme/err_injection/err_injection.o 00:04:01.676 CC test/nvme/aer/aer.o 00:04:01.676 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.676 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.676 CC test/nvme/fdp/fdp.o 00:04:01.676 CC test/blobfs/mkfs/mkfs.o 00:04:01.676 LINK memory_ut 00:04:01.676 CC test/accel/dif/dif.o 00:04:01.938 CC test/lvol/esnap/esnap.o 00:04:01.938 LINK connect_stress 00:04:01.938 LINK startup 00:04:01.938 LINK boot_partition 00:04:01.938 LINK reserve 00:04:01.938 LINK fused_ordering 00:04:01.938 LINK err_injection 00:04:01.938 LINK doorbell_aers 00:04:01.938 LINK mkfs 00:04:01.938 LINK reset 00:04:01.938 LINK nvme_dp 00:04:01.938 LINK simple_copy 00:04:01.938 LINK sgl 00:04:01.938 CC examples/nvme/hotplug/hotplug.o 00:04:01.938 CC examples/nvme/arbitration/arbitration.o 00:04:01.938 CC examples/nvme/abort/abort.o 00:04:01.938 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:01.938 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:01.938 CC examples/nvme/hello_world/hello_world.o 00:04:01.938 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.938 CC examples/nvme/reconnect/reconnect.o 00:04:01.938 LINK overhead 00:04:01.938 LINK aer 00:04:01.938 LINK iscsi_fuzz 00:04:01.938 LINK nvme_compliance 00:04:01.938 LINK fdp 00:04:02.199 CC examples/accel/perf/accel_perf.o 00:04:02.199 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:02.199 LINK pmr_persistence 00:04:02.199 CC examples/blob/hello_world/hello_blob.o 00:04:02.199 CC examples/blob/cli/blobcli.o 00:04:02.199 LINK cmb_copy 00:04:02.199 LINK hotplug 00:04:02.199 LINK hello_world 00:04:02.460 LINK reconnect 00:04:02.460 LINK arbitration 00:04:02.460 LINK abort 00:04:02.460 LINK dif 00:04:02.460 LINK hello_fsdev 00:04:02.460 LINK nvme_manage 00:04:02.460 LINK hello_blob 00:04:02.723 LINK accel_perf 00:04:02.723 LINK blobcli 00:04:02.985 LINK cuse 00:04:02.985 CC test/bdev/bdevio/bdevio.o 00:04:03.245 CC examples/bdev/hello_world/hello_bdev.o 00:04:03.245 CC examples/bdev/bdevperf/bdevperf.o 00:04:03.507 LINK bdevio 00:04:03.507 LINK hello_bdev 00:04:04.081 LINK bdevperf 00:04:04.655 CC examples/nvmf/nvmf/nvmf.o 00:04:04.916 LINK nvmf 00:04:06.304 LINK esnap 00:04:06.565 00:04:06.565 real 0m56.214s 00:04:06.565 user 8m7.084s 00:04:06.565 sys 5m27.300s 00:04:06.565 10:21:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:06.565 10:21:38 make -- common/autotest_common.sh@10 -- $ set +x 00:04:06.565 ************************************ 00:04:06.565 END TEST make 00:04:06.565 ************************************ 00:04:06.565 10:21:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:06.565 10:21:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:06.565 10:21:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:06.565 10:21:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.565 10:21:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:06.565 10:21:38 -- pm/common@44 -- $ pid=1725351 00:04:06.565 10:21:38 -- pm/common@50 -- $ kill -TERM 1725351 00:04:06.565 10:21:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.565 10:21:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:06.565 10:21:38 -- pm/common@44 -- $ pid=1725352 00:04:06.565 10:21:38 -- pm/common@50 -- $ kill -TERM 1725352 00:04:06.565 10:21:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.565 
10:21:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:06.565 10:21:38 -- pm/common@44 -- $ pid=1725354 00:04:06.565 10:21:38 -- pm/common@50 -- $ kill -TERM 1725354 00:04:06.565 10:21:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.565 10:21:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:06.565 10:21:38 -- pm/common@44 -- $ pid=1725378 00:04:06.565 10:21:38 -- pm/common@50 -- $ sudo -E kill -TERM 1725378 00:04:06.565 10:21:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:06.565 10:21:38 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:06.827 10:21:39 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.827 10:21:39 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.827 10:21:39 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.827 10:21:39 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.827 10:21:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.827 10:21:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.827 10:21:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.827 10:21:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.827 10:21:39 -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.827 10:21:39 -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.827 10:21:39 -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.827 10:21:39 -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.827 10:21:39 -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.827 10:21:39 -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.827 10:21:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.827 10:21:39 -- scripts/common.sh@344 -- # case "$op" in 00:04:06.827 10:21:39 -- scripts/common.sh@345 -- # : 1 00:04:06.827 10:21:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.827 10:21:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.827 10:21:39 -- scripts/common.sh@365 -- # decimal 1 00:04:06.827 10:21:39 -- scripts/common.sh@353 -- # local d=1 00:04:06.827 10:21:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.827 10:21:39 -- scripts/common.sh@355 -- # echo 1 00:04:06.827 10:21:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.827 10:21:39 -- scripts/common.sh@366 -- # decimal 2 00:04:06.827 10:21:39 -- scripts/common.sh@353 -- # local d=2 00:04:06.827 10:21:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.827 10:21:39 -- scripts/common.sh@355 -- # echo 2 00:04:06.827 10:21:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.827 10:21:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.827 10:21:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.827 10:21:39 -- scripts/common.sh@368 -- # return 0 00:04:06.827 10:21:39 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.827 10:21:39 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.827 --rc genhtml_branch_coverage=1 00:04:06.827 --rc genhtml_function_coverage=1 00:04:06.827 --rc genhtml_legend=1 00:04:06.827 --rc geninfo_all_blocks=1 00:04:06.827 --rc geninfo_unexecuted_blocks=1 00:04:06.827 00:04:06.827 ' 00:04:06.827 10:21:39 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.827 --rc genhtml_branch_coverage=1 00:04:06.827 --rc genhtml_function_coverage=1 00:04:06.827 --rc genhtml_legend=1 00:04:06.827 --rc geninfo_all_blocks=1 00:04:06.827 --rc geninfo_unexecuted_blocks=1 00:04:06.827 00:04:06.827 ' 00:04:06.827 10:21:39 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.827 --rc genhtml_branch_coverage=1 00:04:06.827 --rc genhtml_function_coverage=1 00:04:06.827 --rc genhtml_legend=1 00:04:06.827 --rc geninfo_all_blocks=1 00:04:06.827 --rc geninfo_unexecuted_blocks=1 00:04:06.827 00:04:06.827 ' 00:04:06.827 10:21:39 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.827 --rc genhtml_branch_coverage=1 00:04:06.827 --rc genhtml_function_coverage=1 00:04:06.827 --rc genhtml_legend=1 00:04:06.827 --rc geninfo_all_blocks=1 00:04:06.827 --rc geninfo_unexecuted_blocks=1 00:04:06.827 00:04:06.827 ' 00:04:06.827 10:21:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.827 10:21:39 -- nvmf/common.sh@7 -- # uname -s 00:04:06.827 10:21:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.827 10:21:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.827 10:21:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.827 10:21:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.827 10:21:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.827 10:21:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.827 10:21:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.827 10:21:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.827 10:21:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.827 10:21:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.827 10:21:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:06.827 10:21:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:06.827 10:21:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.827 10:21:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.827 10:21:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:06.827 10:21:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.827 10:21:39 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.827 10:21:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.827 10:21:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.827 10:21:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.827 10:21:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.827 10:21:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.827 10:21:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.827 10:21:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.827 10:21:39 -- paths/export.sh@5 -- # export PATH 00:04:06.827 10:21:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.827 10:21:39 -- nvmf/common.sh@51 -- # : 0 00:04:06.827 10:21:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.828 10:21:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:06.828 10:21:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.828 10:21:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.828 10:21:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.828 10:21:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.828 10:21:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.828 10:21:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.828 10:21:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.828 10:21:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:06.828 10:21:39 -- spdk/autotest.sh@32 -- # uname -s 00:04:06.828 10:21:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:06.828 10:21:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:06.828 10:21:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:06.828 10:21:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:06.828 10:21:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:06.828 10:21:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:06.828 10:21:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:06.828 10:21:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:06.828 10:21:39 -- spdk/autotest.sh@48 -- # udevadm_pid=1791473 00:04:06.828 10:21:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:06.828 10:21:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:06.828 10:21:39 -- pm/common@17 -- # local monitor 00:04:06.828 10:21:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.828 10:21:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.828 10:21:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.828 10:21:39 -- pm/common@21 -- # date +%s 00:04:06.828 10:21:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.828 10:21:39 -- pm/common@25 -- # sleep 1 00:04:06.828 10:21:39 -- pm/common@21 -- # date +%s 00:04:06.828 10:21:39 -- pm/common@21 -- # date +%s 00:04:06.828 10:21:39 -- pm/common@21 -- # date +%s 00:04:06.828 10:21:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094499 00:04:06.828 10:21:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094499 00:04:06.828 10:21:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094499 00:04:06.828 10:21:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094499 00:04:07.088 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094499_collect-cpu-load.pm.log 00:04:07.088 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094499_collect-vmstat.pm.log 00:04:07.088 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094499_collect-cpu-temp.pm.log 00:04:07.088 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094499_collect-bmc-pm.bmc.pm.log 00:04:08.029 10:21:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.029 10:21:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.029 10:21:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.029 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:08.029 10:21:40 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.029 10:21:40 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:08.029 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:04:08.029 10:21:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:08.029 10:21:40 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.029 10:21:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.029 10:21:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:08.029 10:21:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.029 10:21:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.029 10:21:40 -- common/autotest_common.sh@1457 -- # uname 00:04:08.030 10:21:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:08.030 10:21:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.030 10:21:40 -- common/autotest_common.sh@1477 -- # uname 00:04:08.030 10:21:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:08.030 10:21:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.030 10:21:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.030 lcov: LCOV version 1.15 00:04:08.030 10:21:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:22.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:22.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:41.071 10:22:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:41.071 10:22:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.071 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:04:41.071 10:22:10 -- spdk/autotest.sh@78 -- # rm -f 00:04:41.071 10:22:10 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.643 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:41.643 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:41.904 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:41.904 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:42.166 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:42.166 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:42.166 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:42.166 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:42.427 10:22:14 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:42.427 10:22:14 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:42.427 10:22:14 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:42.427 10:22:14 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:42.427 10:22:14 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:42.427 10:22:14 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:42.427 10:22:14 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:42.427 10:22:14 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.427 10:22:14 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:42.427 10:22:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:42.427 10:22:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.427 10:22:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.427 10:22:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:42.427 10:22:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:42.427 10:22:14 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:42.427 No valid GPT data, bailing 00:04:42.427 10:22:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.427 10:22:14 -- scripts/common.sh@394 -- # pt= 00:04:42.427 10:22:14 -- scripts/common.sh@395 -- # return 1 00:04:42.427 10:22:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:42.427 1+0 records in 00:04:42.427 1+0 records out 00:04:42.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.001965 s, 534 MB/s 00:04:42.427 10:22:14 -- spdk/autotest.sh@105 -- # sync 00:04:42.427 10:22:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:42.427 10:22:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:42.427 10:22:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:52.431 10:22:23 -- spdk/autotest.sh@111 -- # uname -s 00:04:52.431 10:22:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:52.431 10:22:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:52.431 10:22:23 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.977 Hugepages 00:04:54.977 node hugesize free / total 00:04:54.977 node0 1048576kB 0 / 0 00:04:54.977 node0 2048kB 0 / 0 00:04:54.977 node1 1048576kB 0 / 0 00:04:54.977 node1 2048kB 0 / 0 00:04:54.977 00:04:54.977 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.977 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:54.977 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:54.977 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:54.977 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:54.977 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:04:54.977 10:22:27 -- spdk/autotest.sh@117 -- # uname -s 00:04:54.977 10:22:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:54.977 10:22:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:54.977 10:22:27 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.291 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.291 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.553 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.471 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:00.471 10:22:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:01.415 10:22:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:01.415 10:22:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:01.415 10:22:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.415 10:22:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:01.415 10:22:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:01.415 10:22:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:01.415 10:22:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.415 10:22:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.415 10:22:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.676 10:22:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:01.676 10:22:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:01.676 10:22:33 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.982 Waiting for block devices as requested 00:05:04.982 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:04.982 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:05.244 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:05.244 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.244 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.506 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.506 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.506 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:05.767 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:06.029 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:06.029 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:06.029 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:06.029 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:06.291 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:06.291 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:06.291 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:06.291 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:05:06.864 10:22:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:06.864 10:22:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:06.864 10:22:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:06.864 10:22:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:06.864 10:22:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:06.864 10:22:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:06.864 10:22:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:06.864 10:22:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:06.864 10:22:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:06.864 10:22:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:06.864 10:22:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:06.864 10:22:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:06.864 10:22:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:06.864 10:22:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:06.864 10:22:38 -- common/autotest_common.sh@1543 -- # continue 00:05:06.864 10:22:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:06.864 10:22:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.864 10:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:06.864 10:22:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:06.864 10:22:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.864 10:22:39 -- common/autotest_common.sh@10 -- # set +x 00:05:06.864 10:22:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.171 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:10.171 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:10.433 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:10.695 10:22:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
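The 'ioatdma -> vfio-pci' and 'nvme -> vfio-pci' lines above are scripts/setup.sh rebinding devices from their kernel drivers to vfio-pci so that SPDK's userspace drivers can claim them. A minimal sketch of the sysfs mechanism involved, assuming a kernel with driver_override support; the real script additionally handles hugepages, permissions, and allow/deny lists, and the BDF below is simply the NVMe controller seen in this log:

    bdf=0000:65:00.0                                            # the NVMe controller above
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach the kernel driver
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override" # pin the next driver match
    echo "$bdf"   > /sys/bus/pci/drivers_probe                  # re-probe -> binds vfio-pci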
00:05:10.695 10:22:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.695 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.695 10:22:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:10.695 10:22:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:10.695 10:22:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.695 10:22:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:10.695 10:22:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:10.695 10:22:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:10.695 10:22:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:10.695 10:22:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:10.695 10:22:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:10.695 10:22:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:10.695 10:22:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.957 10:22:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:10.958 10:22:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:10.958 10:22:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:10.958 10:22:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:10.958 10:22:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:10.958 10:22:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:10.958 10:22:43 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:10.958 10:22:43 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:10.958 10:22:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:10.958 10:22:43 -- common/autotest_common.sh@1572 -- # return 0 00:05:10.958 10:22:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:10.958 10:22:43 -- common/autotest_common.sh@1580 -- # return 0 00:05:10.958 10:22:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:10.958 10:22:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:10.958 10:22:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.958 10:22:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.958 10:22:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:10.958 10:22:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.958 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.958 10:22:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:10.958 10:22:43 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.958 10:22:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.958 10:22:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.958 10:22:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.958 ************************************ 00:05:10.958 START TEST env 00:05:10.958 ************************************ 00:05:10.958 10:22:43 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.958 * Looking for test storage... 
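The opal_revert_cleanup trace just above builds its device list with gen_nvme.sh | jq -r '.config[].params.traddr' and keeps only controllers whose PCI device id is 0x0a54 (an Intel datacenter NVMe id targeted by the opal tests); the 144d:a80a Samsung controller on this node does not match, so the list stays empty and the cleanup returns 0. A rough standalone equivalent of that filter using sysfs only (a sketch, not the SPDK helper itself; the id is taken from the trace above):

    want=0x0a54                                   # opal-capable Intel id from the trace
    for dev in /sys/bus/pci/drivers/nvme/0000:*; do
        [[ -e $dev ]] || continue                 # glob did not match: nothing bound to nvme
        [[ $(cat "$dev/device") == "$want" ]] && echo "${dev##*/}"
    done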
00:05:10.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:10.958 10:22:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.958 10:22:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.958 10:22:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.220 10:22:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.220 10:22:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.220 10:22:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.220 10:22:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.220 10:22:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.220 10:22:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.220 10:22:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.220 10:22:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.220 10:22:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.220 10:22:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.220 10:22:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.220 10:22:43 env -- scripts/common.sh@344 -- # case "$op" in 00:05:11.220 10:22:43 env -- scripts/common.sh@345 -- # : 1 00:05:11.220 10:22:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.220 10:22:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.220 10:22:43 env -- scripts/common.sh@365 -- # decimal 1 00:05:11.220 10:22:43 env -- scripts/common.sh@353 -- # local d=1 00:05:11.220 10:22:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.220 10:22:43 env -- scripts/common.sh@355 -- # echo 1 00:05:11.220 10:22:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.220 10:22:43 env -- scripts/common.sh@366 -- # decimal 2 00:05:11.220 10:22:43 env -- scripts/common.sh@353 -- # local d=2 00:05:11.220 10:22:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.220 10:22:43 env -- scripts/common.sh@355 -- # echo 2 00:05:11.220 10:22:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.220 10:22:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.220 10:22:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.220 10:22:43 env -- scripts/common.sh@368 -- # return 0 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.220 --rc genhtml_branch_coverage=1 00:05:11.220 --rc genhtml_function_coverage=1 00:05:11.220 --rc genhtml_legend=1 00:05:11.220 --rc geninfo_all_blocks=1 00:05:11.220 --rc geninfo_unexecuted_blocks=1 00:05:11.220 00:05:11.220 ' 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.220 --rc genhtml_branch_coverage=1 00:05:11.220 --rc genhtml_function_coverage=1 00:05:11.220 --rc genhtml_legend=1 00:05:11.220 --rc geninfo_all_blocks=1 00:05:11.220 --rc geninfo_unexecuted_blocks=1 00:05:11.220 00:05:11.220 ' 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.220 --rc genhtml_branch_coverage=1 00:05:11.220 --rc genhtml_function_coverage=1 
00:05:11.220 --rc genhtml_legend=1 00:05:11.220 --rc geninfo_all_blocks=1 00:05:11.220 --rc geninfo_unexecuted_blocks=1 00:05:11.220 00:05:11.220 ' 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.220 --rc genhtml_branch_coverage=1 00:05:11.220 --rc genhtml_function_coverage=1 00:05:11.220 --rc genhtml_legend=1 00:05:11.220 --rc geninfo_all_blocks=1 00:05:11.220 --rc geninfo_unexecuted_blocks=1 00:05:11.220 00:05:11.220 ' 00:05:11.220 10:22:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.220 10:22:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.220 10:22:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.220 ************************************ 00:05:11.220 START TEST env_memory 00:05:11.220 ************************************ 00:05:11.220 10:22:43 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:11.220 00:05:11.220 00:05:11.220 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.220 http://cunit.sourceforge.net/ 00:05:11.220 00:05:11.220 00:05:11.220 Suite: memory 00:05:11.220 Test: alloc and free memory map ...[2024-11-20 10:22:43.514380] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:11.220 passed 00:05:11.220 Test: mem map translation ...[2024-11-20 10:22:43.540138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:11.220 [2024-11-20 10:22:43.540173] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:11.220 [2024-11-20 10:22:43.540219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:11.220 [2024-11-20 10:22:43.540227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:11.220 passed 00:05:11.483 Test: mem map registration ...[2024-11-20 10:22:43.595491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:11.483 [2024-11-20 10:22:43.595513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:11.483 passed 00:05:11.483 Test: mem map adjacent registrations ...passed 00:05:11.483 00:05:11.483 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.483 suites 1 1 n/a 0 0 00:05:11.483 tests 4 4 4 0 0 00:05:11.483 asserts 152 152 152 0 n/a 00:05:11.483 00:05:11.483 Elapsed time = 0.193 seconds 00:05:11.483 00:05:11.483 real 0m0.208s 00:05:11.483 user 0m0.195s 00:05:11.483 sys 0m0.012s 00:05:11.483 10:22:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.483 10:22:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:05:11.483 ************************************ 00:05:11.483 END TEST env_memory 00:05:11.483 ************************************ 00:05:11.483 10:22:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.483 10:22:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.483 10:22:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.483 10:22:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.483 ************************************ 00:05:11.483 START TEST env_vtophys 00:05:11.483 ************************************ 00:05:11.483 10:22:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.483 EAL: lib.eal log level changed from notice to debug 00:05:11.483 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.483 EAL: Detected lcore 1 as core 1 on socket 0 00:05:11.483 EAL: Detected lcore 2 as core 2 on socket 0 00:05:11.483 EAL: Detected lcore 3 as core 3 on socket 0 00:05:11.483 EAL: Detected lcore 4 as core 4 on socket 0 00:05:11.483 EAL: Detected lcore 5 as core 5 on socket 0 00:05:11.483 EAL: Detected lcore 6 as core 6 on socket 0 00:05:11.483 EAL: Detected lcore 7 as core 7 on socket 0 00:05:11.483 EAL: Detected lcore 8 as core 8 on socket 0 00:05:11.483 EAL: Detected lcore 9 as core 9 on socket 0 00:05:11.483 EAL: Detected lcore 10 as core 10 on socket 0 00:05:11.483 EAL: Detected lcore 11 as core 11 on socket 0 00:05:11.483 EAL: Detected lcore 12 as core 12 on socket 0 00:05:11.483 EAL: Detected lcore 13 as core 13 on socket 0 00:05:11.483 EAL: Detected lcore 14 as core 14 on socket 0 00:05:11.483 EAL: Detected lcore 15 as core 15 on socket 0 00:05:11.483 EAL: Detected lcore 16 as core 16 on socket 0 00:05:11.483 EAL: Detected lcore 17 as core 17 on socket 0 00:05:11.483 EAL: Detected lcore 18 as core 18 on socket 0 00:05:11.483 EAL: Detected lcore 19 as core 19 on socket 0 00:05:11.483 EAL: Detected lcore 20 as core 20 on socket 0 00:05:11.483 EAL: Detected lcore 21 as core 21 on socket 0 00:05:11.484 EAL: Detected lcore 22 as core 22 on socket 0 00:05:11.484 EAL: Detected lcore 23 as core 23 on socket 0 00:05:11.484 EAL: Detected lcore 24 as core 24 on socket 0 00:05:11.484 EAL: Detected lcore 25 as core 25 on socket 0 00:05:11.484 EAL: Detected lcore 26 as core 26 on socket 0 00:05:11.484 EAL: Detected lcore 27 as core 27 on socket 0 00:05:11.484 EAL: Detected lcore 28 as core 28 on socket 0 00:05:11.484 EAL: Detected lcore 29 as core 29 on socket 0 00:05:11.484 EAL: Detected lcore 30 as core 30 on socket 0 00:05:11.484 EAL: Detected lcore 31 as core 31 on socket 0 00:05:11.484 EAL: Detected lcore 32 as core 32 on socket 0 00:05:11.484 EAL: Detected lcore 33 as core 33 on socket 0 00:05:11.484 EAL: Detected lcore 34 as core 34 on socket 0 00:05:11.484 EAL: Detected lcore 35 as core 35 on socket 0 00:05:11.484 EAL: Detected lcore 36 as core 0 on socket 1 00:05:11.484 EAL: Detected lcore 37 as core 1 on socket 1 00:05:11.484 EAL: Detected lcore 38 as core 2 on socket 1 00:05:11.484 EAL: Detected lcore 39 as core 3 on socket 1 00:05:11.484 EAL: Detected lcore 40 as core 4 on socket 1 00:05:11.484 EAL: Detected lcore 41 as core 5 on socket 1 00:05:11.484 EAL: Detected lcore 42 as core 6 on socket 1 00:05:11.484 EAL: Detected lcore 43 as core 7 on socket 1 00:05:11.484 EAL: Detected lcore 44 as core 8 on socket 1 00:05:11.484 EAL: Detected lcore 45 as core 9 on socket 1 
00:05:11.484 EAL: Detected lcore 46 as core 10 on socket 1 00:05:11.484 EAL: Detected lcore 47 as core 11 on socket 1 00:05:11.484 EAL: Detected lcore 48 as core 12 on socket 1 00:05:11.484 EAL: Detected lcore 49 as core 13 on socket 1 00:05:11.484 EAL: Detected lcore 50 as core 14 on socket 1 00:05:11.484 EAL: Detected lcore 51 as core 15 on socket 1 00:05:11.484 EAL: Detected lcore 52 as core 16 on socket 1 00:05:11.484 EAL: Detected lcore 53 as core 17 on socket 1 00:05:11.484 EAL: Detected lcore 54 as core 18 on socket 1 00:05:11.484 EAL: Detected lcore 55 as core 19 on socket 1 00:05:11.484 EAL: Detected lcore 56 as core 20 on socket 1 00:05:11.484 EAL: Detected lcore 57 as core 21 on socket 1 00:05:11.484 EAL: Detected lcore 58 as core 22 on socket 1 00:05:11.484 EAL: Detected lcore 59 as core 23 on socket 1 00:05:11.484 EAL: Detected lcore 60 as core 24 on socket 1 00:05:11.484 EAL: Detected lcore 61 as core 25 on socket 1 00:05:11.484 EAL: Detected lcore 62 as core 26 on socket 1 00:05:11.484 EAL: Detected lcore 63 as core 27 on socket 1 00:05:11.484 EAL: Detected lcore 64 as core 28 on socket 1 00:05:11.484 EAL: Detected lcore 65 as core 29 on socket 1 00:05:11.484 EAL: Detected lcore 66 as core 30 on socket 1 00:05:11.484 EAL: Detected lcore 67 as core 31 on socket 1 00:05:11.484 EAL: Detected lcore 68 as core 32 on socket 1 00:05:11.484 EAL: Detected lcore 69 as core 33 on socket 1 00:05:11.484 EAL: Detected lcore 70 as core 34 on socket 1 00:05:11.484 EAL: Detected lcore 71 as core 35 on socket 1 00:05:11.484 EAL: Detected lcore 72 as core 0 on socket 0 00:05:11.484 EAL: Detected lcore 73 as core 1 on socket 0 00:05:11.484 EAL: Detected lcore 74 as core 2 on socket 0 00:05:11.484 EAL: Detected lcore 75 as core 3 on socket 0 00:05:11.484 EAL: Detected lcore 76 as core 4 on socket 0 00:05:11.484 EAL: Detected lcore 77 as core 5 on socket 0 00:05:11.484 EAL: Detected lcore 78 as core 6 on socket 0 00:05:11.484 EAL: Detected lcore 79 as core 7 on socket 0 00:05:11.484 EAL: Detected lcore 80 as core 8 on socket 0 00:05:11.484 EAL: Detected lcore 81 as core 9 on socket 0 00:05:11.484 EAL: Detected lcore 82 as core 10 on socket 0 00:05:11.484 EAL: Detected lcore 83 as core 11 on socket 0 00:05:11.484 EAL: Detected lcore 84 as core 12 on socket 0 00:05:11.484 EAL: Detected lcore 85 as core 13 on socket 0 00:05:11.484 EAL: Detected lcore 86 as core 14 on socket 0 00:05:11.484 EAL: Detected lcore 87 as core 15 on socket 0 00:05:11.484 EAL: Detected lcore 88 as core 16 on socket 0 00:05:11.484 EAL: Detected lcore 89 as core 17 on socket 0 00:05:11.484 EAL: Detected lcore 90 as core 18 on socket 0 00:05:11.484 EAL: Detected lcore 91 as core 19 on socket 0 00:05:11.484 EAL: Detected lcore 92 as core 20 on socket 0 00:05:11.484 EAL: Detected lcore 93 as core 21 on socket 0 00:05:11.484 EAL: Detected lcore 94 as core 22 on socket 0 00:05:11.484 EAL: Detected lcore 95 as core 23 on socket 0 00:05:11.484 EAL: Detected lcore 96 as core 24 on socket 0 00:05:11.484 EAL: Detected lcore 97 as core 25 on socket 0 00:05:11.484 EAL: Detected lcore 98 as core 26 on socket 0 00:05:11.484 EAL: Detected lcore 99 as core 27 on socket 0 00:05:11.484 EAL: Detected lcore 100 as core 28 on socket 0 00:05:11.484 EAL: Detected lcore 101 as core 29 on socket 0 00:05:11.484 EAL: Detected lcore 102 as core 30 on socket 0 00:05:11.484 EAL: Detected lcore 103 as core 31 on socket 0 00:05:11.484 EAL: Detected lcore 104 as core 32 on socket 0 00:05:11.484 EAL: Detected lcore 105 as core 33 on socket 0 00:05:11.484 EAL: 
Detected lcore 106 as core 34 on socket 0 00:05:11.484 EAL: Detected lcore 107 as core 35 on socket 0 00:05:11.484 EAL: Detected lcore 108 as core 0 on socket 1 00:05:11.484 EAL: Detected lcore 109 as core 1 on socket 1 00:05:11.484 EAL: Detected lcore 110 as core 2 on socket 1 00:05:11.484 EAL: Detected lcore 111 as core 3 on socket 1 00:05:11.484 EAL: Detected lcore 112 as core 4 on socket 1 00:05:11.484 EAL: Detected lcore 113 as core 5 on socket 1 00:05:11.484 EAL: Detected lcore 114 as core 6 on socket 1 00:05:11.484 EAL: Detected lcore 115 as core 7 on socket 1 00:05:11.484 EAL: Detected lcore 116 as core 8 on socket 1 00:05:11.484 EAL: Detected lcore 117 as core 9 on socket 1 00:05:11.484 EAL: Detected lcore 118 as core 10 on socket 1 00:05:11.484 EAL: Detected lcore 119 as core 11 on socket 1 00:05:11.484 EAL: Detected lcore 120 as core 12 on socket 1 00:05:11.484 EAL: Detected lcore 121 as core 13 on socket 1 00:05:11.484 EAL: Detected lcore 122 as core 14 on socket 1 00:05:11.484 EAL: Detected lcore 123 as core 15 on socket 1 00:05:11.484 EAL: Detected lcore 124 as core 16 on socket 1 00:05:11.484 EAL: Detected lcore 125 as core 17 on socket 1 00:05:11.484 EAL: Detected lcore 126 as core 18 on socket 1 00:05:11.484 EAL: Detected lcore 127 as core 19 on socket 1 00:05:11.484 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:11.484 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:11.484 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:11.484 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:11.484 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:11.484 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:11.484 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:11.484 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:11.484 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:11.484 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:11.484 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:11.484 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:11.484 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:11.484 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:11.484 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:11.484 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:11.484 EAL: Maximum logical cores by configuration: 128 00:05:11.484 EAL: Detected CPU lcores: 128 00:05:11.484 EAL: Detected NUMA nodes: 2 00:05:11.484 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:11.484 EAL: Detected shared linkage of DPDK 00:05:11.484 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.484 EAL: Bus pci wants IOVA as 'DC' 00:05:11.484 EAL: Buses did not request a specific IOVA mode. 00:05:11.484 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:11.484 EAL: Selected IOVA mode 'VA' 00:05:11.484 EAL: Probing VFIO support... 00:05:11.484 EAL: IOMMU type 1 (Type 1) is supported 00:05:11.484 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:11.484 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:11.484 EAL: VFIO support initialized 00:05:11.484 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.484 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.484 EAL: Setting up physically contiguous memory... 
00:05:11.484 EAL: Setting maximum number of open files to 524288 00:05:11.484 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.484 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:11.484 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.484 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.484 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.484 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.484 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.484 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.484 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.484 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.484 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.484 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.485 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.485 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.485 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.485 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.485 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:11.485 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.485 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:11.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:11.485 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.485 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:11.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:11.485 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.485 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:11.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:11.485 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.485 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:11.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.485 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.485 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:11.485 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:11.485 EAL: Hugepages will be freed exactly as allocated. 00:05:11.485 EAL: No shared files mode enabled, IPC is disabled 00:05:11.485 EAL: No shared files mode enabled, IPC is disabled 00:05:11.485 EAL: TSC frequency is ~2400000 KHz 00:05:11.485 EAL: Main lcore 0 is ready (tid=7ff1e5f3ea00;cpuset=[0]) 00:05:11.485 EAL: Trying to obtain current memory policy. 00:05:11.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.485 EAL: Restoring previous memory policy: 0 00:05:11.485 EAL: request: mp_malloc_sync 00:05:11.485 EAL: No shared files mode enabled, IPC is disabled 00:05:11.485 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.485 EAL: No shared files mode enabled, IPC is disabled 00:05:11.485 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.485 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.747 00:05:11.747 00:05:11.747 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.747 http://cunit.sourceforge.net/ 00:05:11.747 00:05:11.747 00:05:11.747 Suite: components_suite 00:05:11.747 Test: vtophys_malloc_test ...passed 00:05:11.747 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.747 EAL: Restoring previous memory policy: 4 00:05:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.747 EAL: request: mp_malloc_sync 00:05:11.747 EAL: No shared files mode enabled, IPC is disabled 00:05:11.747 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.747 EAL: request: mp_malloc_sync 00:05:11.747 EAL: No shared files mode enabled, IPC is disabled 00:05:11.747 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.747 EAL: Trying to obtain current memory policy. 00:05:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.747 EAL: Restoring previous memory policy: 4 00:05:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.747 EAL: request: mp_malloc_sync 00:05:11.747 EAL: No shared files mode enabled, IPC is disabled 00:05:11.747 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.747 EAL: request: mp_malloc_sync 00:05:11.747 EAL: No shared files mode enabled, IPC is disabled 00:05:11.747 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.747 EAL: Trying to obtain current memory policy. 00:05:11.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.747 EAL: Restoring previous memory policy: 4 00:05:11.747 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.747 EAL: request: mp_malloc_sync 00:05:11.747 EAL: No shared files mode enabled, IPC is disabled 00:05:11.747 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.748 EAL: Trying to obtain current memory policy. 
00:05:11.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.748 EAL: Restoring previous memory policy: 4 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.748 EAL: Trying to obtain current memory policy. 00:05:11.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.748 EAL: Restoring previous memory policy: 4 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.748 EAL: Trying to obtain current memory policy. 00:05:11.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.748 EAL: Restoring previous memory policy: 4 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.748 EAL: Trying to obtain current memory policy. 00:05:11.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.748 EAL: Restoring previous memory policy: 4 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.748 EAL: Trying to obtain current memory policy. 00:05:11.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.748 EAL: Restoring previous memory policy: 4 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.748 EAL: request: mp_malloc_sync 00:05:11.748 EAL: No shared files mode enabled, IPC is disabled 00:05:11.748 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.748 EAL: Trying to obtain current memory policy. 
00:05:11.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.010 EAL: Restoring previous memory policy: 4 00:05:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.010 EAL: request: mp_malloc_sync 00:05:12.010 EAL: No shared files mode enabled, IPC is disabled 00:05:12.010 EAL: Heap on socket 0 was expanded by 514MB 00:05:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.010 EAL: request: mp_malloc_sync 00:05:12.010 EAL: No shared files mode enabled, IPC is disabled 00:05:12.010 EAL: Heap on socket 0 was shrunk by 514MB 00:05:12.010 EAL: Trying to obtain current memory policy. 00:05:12.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.271 EAL: Restoring previous memory policy: 4 00:05:12.271 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.271 EAL: request: mp_malloc_sync 00:05:12.271 EAL: No shared files mode enabled, IPC is disabled 00:05:12.271 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.271 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.271 EAL: request: mp_malloc_sync 00:05:12.271 EAL: No shared files mode enabled, IPC is disabled 00:05:12.271 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.271 passed 00:05:12.271 00:05:12.271 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.271 suites 1 1 n/a 0 0 00:05:12.271 tests 2 2 2 0 0 00:05:12.271 asserts 497 497 497 0 n/a 00:05:12.271 00:05:12.271 Elapsed time = 0.687 seconds 00:05:12.271 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.271 EAL: request: mp_malloc_sync 00:05:12.271 EAL: No shared files mode enabled, IPC is disabled 00:05:12.271 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.271 EAL: No shared files mode enabled, IPC is disabled 00:05:12.271 EAL: No shared files mode enabled, IPC is disabled 00:05:12.271 EAL: No shared files mode enabled, IPC is disabled 00:05:12.271 00:05:12.271 real 0m0.835s 00:05:12.271 user 0m0.436s 00:05:12.271 sys 0m0.373s 00:05:12.271 10:22:44 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.271 10:22:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.271 ************************************ 00:05:12.271 END TEST env_vtophys 00:05:12.271 ************************************ 00:05:12.271 10:22:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.271 10:22:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.271 10:22:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.271 10:22:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.533 ************************************ 00:05:12.533 START TEST env_pci 00:05:12.533 ************************************ 00:05:12.533 10:22:44 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.533 00:05:12.533 00:05:12.533 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.533 http://cunit.sourceforge.net/ 00:05:12.533 00:05:12.533 00:05:12.533 Suite: pci 00:05:12.533 Test: pci_hook ...[2024-11-20 10:22:44.685769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1810774 has claimed it 00:05:12.533 EAL: Cannot find device (10000:00:01.0) 00:05:12.533 EAL: Failed to attach device on primary process 00:05:12.533 passed 00:05:12.533 00:05:12.533 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:12.533 suites 1 1 n/a 0 0 00:05:12.533 tests 1 1 1 0 0 00:05:12.533 asserts 25 25 25 0 n/a 00:05:12.533 00:05:12.533 Elapsed time = 0.031 seconds 00:05:12.533 00:05:12.533 real 0m0.052s 00:05:12.533 user 0m0.019s 00:05:12.533 sys 0m0.033s 00:05:12.533 10:22:44 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.533 10:22:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.533 ************************************ 00:05:12.533 END TEST env_pci 00:05:12.533 ************************************ 00:05:12.533 10:22:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.533 10:22:44 env -- env/env.sh@15 -- # uname 00:05:12.533 10:22:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.533 10:22:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.533 10:22:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.533 10:22:44 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:12.533 10:22:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.533 10:22:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.533 ************************************ 00:05:12.533 START TEST env_dpdk_post_init 00:05:12.533 ************************************ 00:05:12.533 10:22:44 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.533 EAL: Detected CPU lcores: 128 00:05:12.533 EAL: Detected NUMA nodes: 2 00:05:12.533 EAL: Detected shared linkage of DPDK 00:05:12.533 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.533 EAL: Selected IOVA mode 'VA' 00:05:12.533 EAL: VFIO support initialized 00:05:12.533 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.795 EAL: Using IOMMU type 1 (Type 1) 00:05:12.795 EAL: Ignore mapping IO port bar(1) 00:05:13.057 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:13.057 EAL: Ignore mapping IO port bar(1) 00:05:13.057 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:13.318 EAL: Ignore mapping IO port bar(1) 00:05:13.318 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:13.579 EAL: Ignore mapping IO port bar(1) 00:05:13.579 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:13.841 EAL: Ignore mapping IO port bar(1) 00:05:13.841 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:13.841 EAL: Ignore mapping IO port bar(1) 00:05:14.102 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:14.102 EAL: Ignore mapping IO port bar(1) 00:05:14.363 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:14.363 EAL: Ignore mapping IO port bar(1) 00:05:14.624 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:14.624 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:14.885 EAL: Ignore mapping IO port bar(1) 00:05:14.885 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:15.148 EAL: Ignore mapping IO port bar(1) 00:05:15.148 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:15.409 EAL: Ignore mapping IO port bar(1) 00:05:15.409 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:15.409 EAL: Ignore mapping IO port bar(1) 00:05:15.671 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:15.671 EAL: Ignore mapping IO port bar(1) 00:05:15.932 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:15.932 EAL: Ignore mapping IO port bar(1) 00:05:16.193 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:16.193 EAL: Ignore mapping IO port bar(1) 00:05:16.193 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:16.454 EAL: Ignore mapping IO port bar(1) 00:05:16.454 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:16.454 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:16.454 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:16.716 Starting DPDK initialization... 00:05:16.716 Starting SPDK post initialization... 00:05:16.716 SPDK NVMe probe 00:05:16.716 Attaching to 0000:65:00.0 00:05:16.716 Attached to 0000:65:00.0 00:05:16.716 Cleaning up... 00:05:18.632 00:05:18.632 real 0m5.744s 00:05:18.632 user 0m0.101s 00:05:18.632 sys 0m0.201s 00:05:18.632 10:22:50 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.632 10:22:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.632 ************************************ 00:05:18.632 END TEST env_dpdk_post_init 00:05:18.632 ************************************ 00:05:18.632 10:22:50 env -- env/env.sh@26 -- # uname 00:05:18.632 10:22:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.632 10:22:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.632 10:22:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.632 10:22:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.632 10:22:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.632 ************************************ 00:05:18.632 START TEST env_mem_callbacks 00:05:18.632 ************************************ 00:05:18.632 10:22:50 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.632 EAL: Detected CPU lcores: 128 00:05:18.632 EAL: Detected NUMA nodes: 2 00:05:18.632 EAL: Detected shared linkage of DPDK 00:05:18.632 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.632 EAL: Selected IOVA mode 'VA' 00:05:18.632 EAL: VFIO support initialized 00:05:18.632 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.632 00:05:18.632 00:05:18.632 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.632 http://cunit.sourceforge.net/ 00:05:18.632 00:05:18.632 00:05:18.632 Suite: memory 00:05:18.632 Test: test ... 
00:05:18.632 register 0x200000200000 2097152 00:05:18.632 malloc 3145728 00:05:18.632 register 0x200000400000 4194304 00:05:18.632 buf 0x200000500000 len 3145728 PASSED 00:05:18.632 malloc 64 00:05:18.632 buf 0x2000004fff40 len 64 PASSED 00:05:18.632 malloc 4194304 00:05:18.632 register 0x200000800000 6291456 00:05:18.632 buf 0x200000a00000 len 4194304 PASSED 00:05:18.632 free 0x200000500000 3145728 00:05:18.632 free 0x2000004fff40 64 00:05:18.632 unregister 0x200000400000 4194304 PASSED 00:05:18.632 free 0x200000a00000 4194304 00:05:18.632 unregister 0x200000800000 6291456 PASSED 00:05:18.632 malloc 8388608 00:05:18.632 register 0x200000400000 10485760 00:05:18.632 buf 0x200000600000 len 8388608 PASSED 00:05:18.632 free 0x200000600000 8388608 00:05:18.632 unregister 0x200000400000 10485760 PASSED 00:05:18.632 passed 00:05:18.632 00:05:18.632 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.632 suites 1 1 n/a 0 0 00:05:18.632 tests 1 1 1 0 0 00:05:18.632 asserts 15 15 15 0 n/a 00:05:18.632 00:05:18.632 Elapsed time = 0.010 seconds 00:05:18.632 00:05:18.632 real 0m0.070s 00:05:18.632 user 0m0.023s 00:05:18.632 sys 0m0.046s 00:05:18.632 10:22:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.632 10:22:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:18.632 ************************************ 00:05:18.632 END TEST env_mem_callbacks 00:05:18.632 ************************************ 00:05:18.632 00:05:18.632 real 0m7.533s 00:05:18.632 user 0m1.057s 00:05:18.632 sys 0m1.043s 00:05:18.632 10:22:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.632 10:22:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.632 ************************************ 00:05:18.632 END TEST env 00:05:18.632 ************************************ 00:05:18.632 10:22:50 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.632 10:22:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.632 10:22:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.632 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:05:18.632 ************************************ 00:05:18.632 START TEST rpc 00:05:18.632 ************************************ 00:05:18.632 10:22:50 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.632 * Looking for test storage... 
00:05:18.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.632 10:22:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.632 10:22:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.632 10:22:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.895 10:22:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.895 10:22:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.895 10:22:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.895 10:22:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.895 10:22:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.895 10:22:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.895 10:22:51 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.895 10:22:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.895 10:22:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.895 10:22:51 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.895 10:22:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.895 10:22:51 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.895 10:22:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.895 10:22:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.895 10:22:51 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.895 --rc genhtml_branch_coverage=1 00:05:18.895 --rc genhtml_function_coverage=1 00:05:18.895 --rc genhtml_legend=1 00:05:18.895 --rc geninfo_all_blocks=1 00:05:18.895 --rc geninfo_unexecuted_blocks=1 00:05:18.895 00:05:18.895 ' 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.895 --rc genhtml_branch_coverage=1 00:05:18.895 --rc genhtml_function_coverage=1 00:05:18.895 --rc genhtml_legend=1 00:05:18.895 --rc geninfo_all_blocks=1 00:05:18.895 --rc geninfo_unexecuted_blocks=1 00:05:18.895 00:05:18.895 ' 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.895 --rc genhtml_branch_coverage=1 00:05:18.895 --rc genhtml_function_coverage=1 
00:05:18.895 --rc genhtml_legend=1 00:05:18.895 --rc geninfo_all_blocks=1 00:05:18.895 --rc geninfo_unexecuted_blocks=1 00:05:18.895 00:05:18.895 ' 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.895 --rc genhtml_branch_coverage=1 00:05:18.895 --rc genhtml_function_coverage=1 00:05:18.895 --rc genhtml_legend=1 00:05:18.895 --rc geninfo_all_blocks=1 00:05:18.895 --rc geninfo_unexecuted_blocks=1 00:05:18.895 00:05:18.895 ' 00:05:18.895 10:22:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1812203 00:05:18.895 10:22:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.895 10:22:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1812203 00:05:18.895 10:22:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@835 -- # '[' -z 1812203 ']' 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.895 10:22:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.895 [2024-11-20 10:22:51.103030] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:18.895 [2024-11-20 10:22:51.103098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812203 ] 00:05:18.895 [2024-11-20 10:22:51.197554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.895 [2024-11-20 10:22:51.249127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:18.895 [2024-11-20 10:22:51.249195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1812203' to capture a snapshot of events at runtime. 00:05:18.895 [2024-11-20 10:22:51.249203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.895 [2024-11-20 10:22:51.249229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.895 [2024-11-20 10:22:51.249236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1812203 for offline analysis/debug. 
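With spdk_tgt now listening, the rpc_integrity test traced below builds a malloc bdev, layers a passthru bdev on top of it, checks the bdev_get_bdevs output with jq, and tears both down again. A minimal sketch of the same round trip driven by hand with scripts/rpc.py; the RPC names and arguments appear in the log itself, while the socket path and the polling loop (a crude stand-in for the harness's waitforlisten helper) are assumptions.

#!/usr/bin/env bash
# Sketch: the rpc_integrity sequence against a live target.
set -e
ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"

# Crude waitforlisten: poll until the RPC socket answers.
for _ in $(seq 1 100); do
    $RPC -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

$RPC bdev_get_bdevs | jq length                    # expect 0 on a fresh target
$RPC bdev_malloc_create 8 512                      # 8 MiB, 512-byte blocks -> Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 2
$RPC bdev_passthru_delete Passthru0
$RPC bdev_get_bdevs | jq length                    # back to 1 after the passthru goes
$RPC bdev_malloc_delete Malloc0

The "claimed": true / "claim_type": "exclusive_write" fields in the JSON below are the observable effect of the passthru claiming its base bdev.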
00:05:18.895 [2024-11-20 10:22:51.249994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.842 10:22:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.842 10:22:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:19.842 10:22:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.842 10:22:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.842 10:22:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.842 10:22:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.842 10:22:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.842 10:22:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.842 10:22:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.842 ************************************ 00:05:19.842 START TEST rpc_integrity 00:05:19.842 ************************************ 00:05:19.842 10:22:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:19.842 10:22:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.842 10:22:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.842 10:22:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.842 10:22:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.842 10:22:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.842 10:22:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.842 { 00:05:19.842 "name": "Malloc0", 00:05:19.842 "aliases": [ 00:05:19.842 "901ebd30-7025-49a5-ae47-9e51fef80874" 00:05:19.842 ], 00:05:19.842 "product_name": "Malloc disk", 00:05:19.842 "block_size": 512, 00:05:19.842 "num_blocks": 16384, 00:05:19.842 "uuid": "901ebd30-7025-49a5-ae47-9e51fef80874", 00:05:19.842 "assigned_rate_limits": { 00:05:19.842 "rw_ios_per_sec": 0, 00:05:19.842 "rw_mbytes_per_sec": 0, 00:05:19.842 "r_mbytes_per_sec": 0, 00:05:19.842 "w_mbytes_per_sec": 0 00:05:19.842 }, 
00:05:19.842 "claimed": false, 00:05:19.842 "zoned": false, 00:05:19.842 "supported_io_types": { 00:05:19.842 "read": true, 00:05:19.842 "write": true, 00:05:19.842 "unmap": true, 00:05:19.842 "flush": true, 00:05:19.842 "reset": true, 00:05:19.842 "nvme_admin": false, 00:05:19.842 "nvme_io": false, 00:05:19.842 "nvme_io_md": false, 00:05:19.842 "write_zeroes": true, 00:05:19.842 "zcopy": true, 00:05:19.842 "get_zone_info": false, 00:05:19.842 "zone_management": false, 00:05:19.842 "zone_append": false, 00:05:19.842 "compare": false, 00:05:19.842 "compare_and_write": false, 00:05:19.842 "abort": true, 00:05:19.842 "seek_hole": false, 00:05:19.842 "seek_data": false, 00:05:19.842 "copy": true, 00:05:19.842 "nvme_iov_md": false 00:05:19.842 }, 00:05:19.842 "memory_domains": [ 00:05:19.842 { 00:05:19.842 "dma_device_id": "system", 00:05:19.842 "dma_device_type": 1 00:05:19.842 }, 00:05:19.842 { 00:05:19.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.842 "dma_device_type": 2 00:05:19.842 } 00:05:19.842 ], 00:05:19.842 "driver_specific": {} 00:05:19.842 } 00:05:19.842 ]' 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.842 [2024-11-20 10:22:52.116671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.842 [2024-11-20 10:22:52.116721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.842 [2024-11-20 10:22:52.116739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1990db0 00:05:19.842 [2024-11-20 10:22:52.116747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.842 [2024-11-20 10:22:52.118354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.842 [2024-11-20 10:22:52.118393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.842 Passthru0 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.842 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.842 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.842 { 00:05:19.842 "name": "Malloc0", 00:05:19.842 "aliases": [ 00:05:19.842 "901ebd30-7025-49a5-ae47-9e51fef80874" 00:05:19.842 ], 00:05:19.842 "product_name": "Malloc disk", 00:05:19.842 "block_size": 512, 00:05:19.842 "num_blocks": 16384, 00:05:19.842 "uuid": "901ebd30-7025-49a5-ae47-9e51fef80874", 00:05:19.842 "assigned_rate_limits": { 00:05:19.842 "rw_ios_per_sec": 0, 00:05:19.842 "rw_mbytes_per_sec": 0, 00:05:19.842 "r_mbytes_per_sec": 0, 00:05:19.842 "w_mbytes_per_sec": 0 00:05:19.842 }, 00:05:19.842 "claimed": true, 00:05:19.842 "claim_type": "exclusive_write", 00:05:19.842 "zoned": false, 00:05:19.842 "supported_io_types": { 00:05:19.842 "read": true, 00:05:19.842 "write": true, 00:05:19.842 "unmap": true, 00:05:19.842 "flush": 
true, 00:05:19.842 "reset": true, 00:05:19.842 "nvme_admin": false, 00:05:19.842 "nvme_io": false, 00:05:19.842 "nvme_io_md": false, 00:05:19.842 "write_zeroes": true, 00:05:19.842 "zcopy": true, 00:05:19.842 "get_zone_info": false, 00:05:19.842 "zone_management": false, 00:05:19.842 "zone_append": false, 00:05:19.842 "compare": false, 00:05:19.842 "compare_and_write": false, 00:05:19.842 "abort": true, 00:05:19.842 "seek_hole": false, 00:05:19.842 "seek_data": false, 00:05:19.842 "copy": true, 00:05:19.842 "nvme_iov_md": false 00:05:19.842 }, 00:05:19.842 "memory_domains": [ 00:05:19.842 { 00:05:19.842 "dma_device_id": "system", 00:05:19.842 "dma_device_type": 1 00:05:19.842 }, 00:05:19.842 { 00:05:19.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.842 "dma_device_type": 2 00:05:19.842 } 00:05:19.842 ], 00:05:19.842 "driver_specific": {} 00:05:19.842 }, 00:05:19.842 { 00:05:19.842 "name": "Passthru0", 00:05:19.842 "aliases": [ 00:05:19.842 "5f42e0be-5cd8-5de3-bcd3-2822963baf19" 00:05:19.842 ], 00:05:19.842 "product_name": "passthru", 00:05:19.842 "block_size": 512, 00:05:19.842 "num_blocks": 16384, 00:05:19.842 "uuid": "5f42e0be-5cd8-5de3-bcd3-2822963baf19", 00:05:19.842 "assigned_rate_limits": { 00:05:19.842 "rw_ios_per_sec": 0, 00:05:19.842 "rw_mbytes_per_sec": 0, 00:05:19.842 "r_mbytes_per_sec": 0, 00:05:19.842 "w_mbytes_per_sec": 0 00:05:19.842 }, 00:05:19.842 "claimed": false, 00:05:19.842 "zoned": false, 00:05:19.842 "supported_io_types": { 00:05:19.842 "read": true, 00:05:19.842 "write": true, 00:05:19.842 "unmap": true, 00:05:19.842 "flush": true, 00:05:19.842 "reset": true, 00:05:19.842 "nvme_admin": false, 00:05:19.842 "nvme_io": false, 00:05:19.842 "nvme_io_md": false, 00:05:19.842 "write_zeroes": true, 00:05:19.842 "zcopy": true, 00:05:19.842 "get_zone_info": false, 00:05:19.842 "zone_management": false, 00:05:19.842 "zone_append": false, 00:05:19.842 "compare": false, 00:05:19.842 "compare_and_write": false, 00:05:19.842 "abort": true, 00:05:19.842 "seek_hole": false, 00:05:19.842 "seek_data": false, 00:05:19.842 "copy": true, 00:05:19.842 "nvme_iov_md": false 00:05:19.842 }, 00:05:19.843 "memory_domains": [ 00:05:19.843 { 00:05:19.843 "dma_device_id": "system", 00:05:19.843 "dma_device_type": 1 00:05:19.843 }, 00:05:19.843 { 00:05:19.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.843 "dma_device_type": 2 00:05:19.843 } 00:05:19.843 ], 00:05:19.843 "driver_specific": { 00:05:19.843 "passthru": { 00:05:19.843 "name": "Passthru0", 00:05:19.843 "base_bdev_name": "Malloc0" 00:05:19.843 } 00:05:19.843 } 00:05:19.843 } 00:05:19.843 ]' 00:05:19.843 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.843 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.843 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.843 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.843 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.843 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.843 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.843 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.843 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.105 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:20.105 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.105 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.105 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.105 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.105 10:22:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.105 00:05:20.105 real 0m0.307s 00:05:20.105 user 0m0.197s 00:05:20.105 sys 0m0.042s 00:05:20.105 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.105 10:22:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 ************************************ 00:05:20.105 END TEST rpc_integrity 00:05:20.105 ************************************ 00:05:20.105 10:22:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:20.105 10:22:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.105 10:22:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.105 10:22:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 ************************************ 00:05:20.105 START TEST rpc_plugins 00:05:20.105 ************************************ 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.105 { 00:05:20.105 "name": "Malloc1", 00:05:20.105 "aliases": [ 00:05:20.105 "dc306b8a-dec4-4b18-b625-bd4e4fe3440a" 00:05:20.105 ], 00:05:20.105 "product_name": "Malloc disk", 00:05:20.105 "block_size": 4096, 00:05:20.105 "num_blocks": 256, 00:05:20.105 "uuid": "dc306b8a-dec4-4b18-b625-bd4e4fe3440a", 00:05:20.105 "assigned_rate_limits": { 00:05:20.105 "rw_ios_per_sec": 0, 00:05:20.105 "rw_mbytes_per_sec": 0, 00:05:20.105 "r_mbytes_per_sec": 0, 00:05:20.105 "w_mbytes_per_sec": 0 00:05:20.105 }, 00:05:20.105 "claimed": false, 00:05:20.105 "zoned": false, 00:05:20.105 "supported_io_types": { 00:05:20.105 "read": true, 00:05:20.105 "write": true, 00:05:20.105 "unmap": true, 00:05:20.105 "flush": true, 00:05:20.105 "reset": true, 00:05:20.105 "nvme_admin": false, 00:05:20.105 "nvme_io": false, 00:05:20.105 "nvme_io_md": false, 00:05:20.105 "write_zeroes": true, 00:05:20.105 "zcopy": true, 00:05:20.105 "get_zone_info": false, 00:05:20.105 "zone_management": false, 00:05:20.105 "zone_append": false, 00:05:20.105 "compare": false, 00:05:20.105 "compare_and_write": false, 00:05:20.105 "abort": true, 00:05:20.105 "seek_hole": false, 00:05:20.105 "seek_data": false, 00:05:20.105 "copy": true, 00:05:20.105 "nvme_iov_md": false 
00:05:20.105 }, 00:05:20.105 "memory_domains": [ 00:05:20.105 { 00:05:20.105 "dma_device_id": "system", 00:05:20.105 "dma_device_type": 1 00:05:20.105 }, 00:05:20.105 { 00:05:20.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.105 "dma_device_type": 2 00:05:20.105 } 00:05:20.105 ], 00:05:20.105 "driver_specific": {} 00:05:20.105 } 00:05:20.105 ]' 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.105 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.105 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:20.367 10:22:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.367 00:05:20.367 real 0m0.157s 00:05:20.367 user 0m0.092s 00:05:20.367 sys 0m0.025s 00:05:20.367 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.367 10:22:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.367 ************************************ 00:05:20.367 END TEST rpc_plugins 00:05:20.367 ************************************ 00:05:20.367 10:22:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.367 10:22:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.367 10:22:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.367 10:22:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.367 ************************************ 00:05:20.367 START TEST rpc_trace_cmd_test 00:05:20.367 ************************************ 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.367 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1812203", 00:05:20.367 "tpoint_group_mask": "0x8", 00:05:20.367 "iscsi_conn": { 00:05:20.367 "mask": "0x2", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "scsi": { 00:05:20.367 "mask": "0x4", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "bdev": { 00:05:20.367 "mask": "0x8", 00:05:20.367 "tpoint_mask": "0xffffffffffffffff" 00:05:20.367 }, 00:05:20.367 "nvmf_rdma": { 00:05:20.367 "mask": "0x10", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "nvmf_tcp": { 00:05:20.367 "mask": "0x20", 00:05:20.367 
"tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "ftl": { 00:05:20.367 "mask": "0x40", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "blobfs": { 00:05:20.367 "mask": "0x80", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "dsa": { 00:05:20.367 "mask": "0x200", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "thread": { 00:05:20.367 "mask": "0x400", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "nvme_pcie": { 00:05:20.367 "mask": "0x800", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "iaa": { 00:05:20.367 "mask": "0x1000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "nvme_tcp": { 00:05:20.367 "mask": "0x2000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "bdev_nvme": { 00:05:20.367 "mask": "0x4000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "sock": { 00:05:20.367 "mask": "0x8000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "blob": { 00:05:20.367 "mask": "0x10000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "bdev_raid": { 00:05:20.367 "mask": "0x20000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 }, 00:05:20.367 "scheduler": { 00:05:20.367 "mask": "0x40000", 00:05:20.367 "tpoint_mask": "0x0" 00:05:20.367 } 00:05:20.367 }' 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.367 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.368 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.629 00:05:20.629 real 0m0.250s 00:05:20.629 user 0m0.213s 00:05:20.629 sys 0m0.030s 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.629 10:22:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.629 ************************************ 00:05:20.629 END TEST rpc_trace_cmd_test 00:05:20.629 ************************************ 00:05:20.629 10:22:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.629 10:22:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.629 10:22:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.629 10:22:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.629 10:22:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.629 10:22:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.629 ************************************ 00:05:20.629 START TEST rpc_daemon_integrity 00:05:20.629 ************************************ 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.629 10:22:52 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.629 10:22:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.892 { 00:05:20.892 "name": "Malloc2", 00:05:20.892 "aliases": [ 00:05:20.892 "27c18175-940e-4fd8-83af-1e9f165dd0eb" 00:05:20.892 ], 00:05:20.892 "product_name": "Malloc disk", 00:05:20.892 "block_size": 512, 00:05:20.892 "num_blocks": 16384, 00:05:20.892 "uuid": "27c18175-940e-4fd8-83af-1e9f165dd0eb", 00:05:20.892 "assigned_rate_limits": { 00:05:20.892 "rw_ios_per_sec": 0, 00:05:20.892 "rw_mbytes_per_sec": 0, 00:05:20.892 "r_mbytes_per_sec": 0, 00:05:20.892 "w_mbytes_per_sec": 0 00:05:20.892 }, 00:05:20.892 "claimed": false, 00:05:20.892 "zoned": false, 00:05:20.892 "supported_io_types": { 00:05:20.892 "read": true, 00:05:20.892 "write": true, 00:05:20.892 "unmap": true, 00:05:20.892 "flush": true, 00:05:20.892 "reset": true, 00:05:20.892 "nvme_admin": false, 00:05:20.892 "nvme_io": false, 00:05:20.892 "nvme_io_md": false, 00:05:20.892 "write_zeroes": true, 00:05:20.892 "zcopy": true, 00:05:20.892 "get_zone_info": false, 00:05:20.892 "zone_management": false, 00:05:20.892 "zone_append": false, 00:05:20.892 "compare": false, 00:05:20.892 "compare_and_write": false, 00:05:20.892 "abort": true, 00:05:20.892 "seek_hole": false, 00:05:20.892 "seek_data": false, 00:05:20.892 "copy": true, 00:05:20.892 "nvme_iov_md": false 00:05:20.892 }, 00:05:20.892 "memory_domains": [ 00:05:20.892 { 00:05:20.892 "dma_device_id": "system", 00:05:20.892 "dma_device_type": 1 00:05:20.892 }, 00:05:20.892 { 00:05:20.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.892 "dma_device_type": 2 00:05:20.892 } 00:05:20.892 ], 00:05:20.892 "driver_specific": {} 00:05:20.892 } 00:05:20.892 ]' 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.892 [2024-11-20 10:22:53.083475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.892 
[2024-11-20 10:22:53.083522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.892 [2024-11-20 10:22:53.083539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac18d0 00:05:20.892 [2024-11-20 10:22:53.083547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.892 [2024-11-20 10:22:53.084998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.892 [2024-11-20 10:22:53.085034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.892 Passthru0 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.892 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.892 { 00:05:20.892 "name": "Malloc2", 00:05:20.892 "aliases": [ 00:05:20.892 "27c18175-940e-4fd8-83af-1e9f165dd0eb" 00:05:20.892 ], 00:05:20.893 "product_name": "Malloc disk", 00:05:20.893 "block_size": 512, 00:05:20.893 "num_blocks": 16384, 00:05:20.893 "uuid": "27c18175-940e-4fd8-83af-1e9f165dd0eb", 00:05:20.893 "assigned_rate_limits": { 00:05:20.893 "rw_ios_per_sec": 0, 00:05:20.893 "rw_mbytes_per_sec": 0, 00:05:20.893 "r_mbytes_per_sec": 0, 00:05:20.893 "w_mbytes_per_sec": 0 00:05:20.893 }, 00:05:20.893 "claimed": true, 00:05:20.893 "claim_type": "exclusive_write", 00:05:20.893 "zoned": false, 00:05:20.893 "supported_io_types": { 00:05:20.893 "read": true, 00:05:20.893 "write": true, 00:05:20.893 "unmap": true, 00:05:20.893 "flush": true, 00:05:20.893 "reset": true, 00:05:20.893 "nvme_admin": false, 00:05:20.893 "nvme_io": false, 00:05:20.893 "nvme_io_md": false, 00:05:20.893 "write_zeroes": true, 00:05:20.893 "zcopy": true, 00:05:20.893 "get_zone_info": false, 00:05:20.893 "zone_management": false, 00:05:20.893 "zone_append": false, 00:05:20.893 "compare": false, 00:05:20.893 "compare_and_write": false, 00:05:20.893 "abort": true, 00:05:20.893 "seek_hole": false, 00:05:20.893 "seek_data": false, 00:05:20.893 "copy": true, 00:05:20.893 "nvme_iov_md": false 00:05:20.893 }, 00:05:20.893 "memory_domains": [ 00:05:20.893 { 00:05:20.893 "dma_device_id": "system", 00:05:20.893 "dma_device_type": 1 00:05:20.893 }, 00:05:20.893 { 00:05:20.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.893 "dma_device_type": 2 00:05:20.893 } 00:05:20.893 ], 00:05:20.893 "driver_specific": {} 00:05:20.893 }, 00:05:20.893 { 00:05:20.893 "name": "Passthru0", 00:05:20.893 "aliases": [ 00:05:20.893 "0fe9fd00-3951-5982-a620-0e762ac5b47b" 00:05:20.893 ], 00:05:20.893 "product_name": "passthru", 00:05:20.893 "block_size": 512, 00:05:20.893 "num_blocks": 16384, 00:05:20.893 "uuid": "0fe9fd00-3951-5982-a620-0e762ac5b47b", 00:05:20.893 "assigned_rate_limits": { 00:05:20.893 "rw_ios_per_sec": 0, 00:05:20.893 "rw_mbytes_per_sec": 0, 00:05:20.893 "r_mbytes_per_sec": 0, 00:05:20.893 "w_mbytes_per_sec": 0 00:05:20.893 }, 00:05:20.893 "claimed": false, 00:05:20.893 "zoned": false, 00:05:20.893 "supported_io_types": { 00:05:20.893 "read": true, 00:05:20.893 "write": true, 00:05:20.893 "unmap": true, 00:05:20.893 "flush": true, 00:05:20.893 "reset": true, 
00:05:20.893 "nvme_admin": false, 00:05:20.893 "nvme_io": false, 00:05:20.893 "nvme_io_md": false, 00:05:20.893 "write_zeroes": true, 00:05:20.893 "zcopy": true, 00:05:20.893 "get_zone_info": false, 00:05:20.893 "zone_management": false, 00:05:20.893 "zone_append": false, 00:05:20.893 "compare": false, 00:05:20.893 "compare_and_write": false, 00:05:20.893 "abort": true, 00:05:20.893 "seek_hole": false, 00:05:20.893 "seek_data": false, 00:05:20.893 "copy": true, 00:05:20.893 "nvme_iov_md": false 00:05:20.893 }, 00:05:20.893 "memory_domains": [ 00:05:20.893 { 00:05:20.893 "dma_device_id": "system", 00:05:20.893 "dma_device_type": 1 00:05:20.893 }, 00:05:20.893 { 00:05:20.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.893 "dma_device_type": 2 00:05:20.893 } 00:05:20.893 ], 00:05:20.893 "driver_specific": { 00:05:20.893 "passthru": { 00:05:20.893 "name": "Passthru0", 00:05:20.893 "base_bdev_name": "Malloc2" 00:05:20.893 } 00:05:20.893 } 00:05:20.893 } 00:05:20.893 ]' 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.893 00:05:20.893 real 0m0.306s 00:05:20.893 user 0m0.194s 00:05:20.893 sys 0m0.044s 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.893 10:22:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.893 ************************************ 00:05:20.893 END TEST rpc_daemon_integrity 00:05:20.893 ************************************ 00:05:21.154 10:22:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:21.154 10:22:53 rpc -- rpc/rpc.sh@84 -- # killprocess 1812203 00:05:21.154 10:22:53 rpc -- common/autotest_common.sh@954 -- # '[' -z 1812203 ']' 00:05:21.154 10:22:53 rpc -- common/autotest_common.sh@958 -- # kill -0 1812203 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1812203 
00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1812203' 00:05:21.155 killing process with pid 1812203 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@973 -- # kill 1812203 00:05:21.155 10:22:53 rpc -- common/autotest_common.sh@978 -- # wait 1812203 00:05:21.415 00:05:21.415 real 0m2.757s 00:05:21.415 user 0m3.543s 00:05:21.415 sys 0m0.836s 00:05:21.415 10:22:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.415 10:22:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.415 ************************************ 00:05:21.415 END TEST rpc 00:05:21.415 ************************************ 00:05:21.415 10:22:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.415 10:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.415 10:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.415 10:22:53 -- common/autotest_common.sh@10 -- # set +x 00:05:21.415 ************************************ 00:05:21.415 START TEST skip_rpc 00:05:21.415 ************************************ 00:05:21.415 10:22:53 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.415 * Looking for test storage... 00:05:21.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.416 10:22:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.416 10:22:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.416 10:22:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.677 10:22:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.677 --rc genhtml_branch_coverage=1 00:05:21.677 --rc genhtml_function_coverage=1 00:05:21.677 --rc genhtml_legend=1 00:05:21.677 --rc geninfo_all_blocks=1 00:05:21.677 --rc geninfo_unexecuted_blocks=1 00:05:21.677 00:05:21.677 ' 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.677 --rc genhtml_branch_coverage=1 00:05:21.677 --rc genhtml_function_coverage=1 00:05:21.677 --rc genhtml_legend=1 00:05:21.677 --rc geninfo_all_blocks=1 00:05:21.677 --rc geninfo_unexecuted_blocks=1 00:05:21.677 00:05:21.677 ' 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.677 --rc genhtml_branch_coverage=1 00:05:21.677 --rc genhtml_function_coverage=1 00:05:21.677 --rc genhtml_legend=1 00:05:21.677 --rc geninfo_all_blocks=1 00:05:21.677 --rc geninfo_unexecuted_blocks=1 00:05:21.677 00:05:21.677 ' 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.677 --rc genhtml_branch_coverage=1 00:05:21.677 --rc genhtml_function_coverage=1 00:05:21.677 --rc genhtml_legend=1 00:05:21.677 --rc geninfo_all_blocks=1 00:05:21.677 --rc geninfo_unexecuted_blocks=1 00:05:21.677 00:05:21.677 ' 00:05:21.677 10:22:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.677 10:22:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:21.677 10:22:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.677 10:22:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.677 ************************************ 00:05:21.677 START TEST skip_rpc 00:05:21.677 ************************************ 00:05:21.677 10:22:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:21.677 
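skip_rpc's first case, traced below, is a negative test: the target is started with --no-rpc-server, the harness sleeps rather than polling (there is no socket to poll), expects rpc_cmd spdk_get_version to fail, and then kills the target. The same check in isolation might look like this sketch; the flags and RPC name come from the log, the paths from this job's layout.

#!/usr/bin/env bash
# Sketch: with --no-rpc-server the target never opens /var/tmp/spdk.sock,
# so any RPC must fail; success here would be the bug.
ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$ROOT"/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!
sleep 5   # fixed wait, like the test: nothing listens, so nothing to poll

if "$ROOT"/scripts/rpc.py -t 2 spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC succeeded although the RPC server is disabled" >&2
    kill "$pid"; exit 1
fi
kill "$pid"
echo "OK: spdk_get_version failed as expected"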
10:22:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1813047 00:05:21.677 10:22:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.677 10:22:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.677 10:22:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.677 [2024-11-20 10:22:53.978751] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:21.677 [2024-11-20 10:22:53.978812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813047 ] 00:05:21.939 [2024-11-20 10:22:54.069192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.939 [2024-11-20 10:22:54.121294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1813047 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1813047 ']' 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1813047 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813047 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813047' 00:05:27.242 killing process with pid 1813047 00:05:27.242 10:22:58 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1813047 00:05:27.242 10:22:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1813047 00:05:27.242 00:05:27.242 real 0m5.263s 00:05:27.242 user 0m5.025s 00:05:27.242 sys 0m0.286s 00:05:27.242 10:22:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.242 10:22:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.242 ************************************ 00:05:27.242 END TEST skip_rpc 00:05:27.242 ************************************ 00:05:27.242 10:22:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:27.242 10:22:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.242 10:22:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.242 10:22:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.242 ************************************ 00:05:27.242 START TEST skip_rpc_with_json 00:05:27.242 ************************************ 00:05:27.242 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:27.242 10:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:27.242 10:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1814091 00:05:27.242 10:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.242 10:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1814091 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1814091 ']' 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.243 10:22:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.243 [2024-11-20 10:22:59.320374] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:27.243 [2024-11-20 10:22:59.320432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814091 ] 00:05:27.243 [2024-11-20 10:22:59.406529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.243 [2024-11-20 10:22:59.446253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.885 [2024-11-20 10:23:00.135231] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:27.885 request: 00:05:27.885 { 00:05:27.885 "trtype": "tcp", 00:05:27.885 "method": "nvmf_get_transports", 00:05:27.885 "req_id": 1 00:05:27.885 } 00:05:27.885 Got JSON-RPC error response 00:05:27.885 response: 00:05:27.885 { 00:05:27.885 "code": -19, 00:05:27.885 "message": "No such device" 00:05:27.885 } 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.885 [2024-11-20 10:23:00.147332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.885 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.176 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.176 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.176 { 00:05:28.176 "subsystems": [ 00:05:28.177 { 00:05:28.177 "subsystem": "fsdev", 00:05:28.177 "config": [ 00:05:28.177 { 00:05:28.177 "method": "fsdev_set_opts", 00:05:28.177 "params": { 00:05:28.177 "fsdev_io_pool_size": 65535, 00:05:28.177 "fsdev_io_cache_size": 256 00:05:28.177 } 00:05:28.177 } 00:05:28.177 ] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "vfio_user_target", 00:05:28.177 "config": null 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "keyring", 00:05:28.177 "config": [] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "iobuf", 00:05:28.177 "config": [ 00:05:28.177 { 00:05:28.177 "method": "iobuf_set_options", 00:05:28.177 "params": { 00:05:28.177 "small_pool_count": 8192, 00:05:28.177 "large_pool_count": 1024, 00:05:28.177 "small_bufsize": 8192, 00:05:28.177 "large_bufsize": 135168, 00:05:28.177 "enable_numa": false 00:05:28.177 } 00:05:28.177 } 
00:05:28.177 ] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "sock", 00:05:28.177 "config": [ 00:05:28.177 { 00:05:28.177 "method": "sock_set_default_impl", 00:05:28.177 "params": { 00:05:28.177 "impl_name": "posix" 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "sock_impl_set_options", 00:05:28.177 "params": { 00:05:28.177 "impl_name": "ssl", 00:05:28.177 "recv_buf_size": 4096, 00:05:28.177 "send_buf_size": 4096, 00:05:28.177 "enable_recv_pipe": true, 00:05:28.177 "enable_quickack": false, 00:05:28.177 "enable_placement_id": 0, 00:05:28.177 "enable_zerocopy_send_server": true, 00:05:28.177 "enable_zerocopy_send_client": false, 00:05:28.177 "zerocopy_threshold": 0, 00:05:28.177 "tls_version": 0, 00:05:28.177 "enable_ktls": false 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "sock_impl_set_options", 00:05:28.177 "params": { 00:05:28.177 "impl_name": "posix", 00:05:28.177 "recv_buf_size": 2097152, 00:05:28.177 "send_buf_size": 2097152, 00:05:28.177 "enable_recv_pipe": true, 00:05:28.177 "enable_quickack": false, 00:05:28.177 "enable_placement_id": 0, 00:05:28.177 "enable_zerocopy_send_server": true, 00:05:28.177 "enable_zerocopy_send_client": false, 00:05:28.177 "zerocopy_threshold": 0, 00:05:28.177 "tls_version": 0, 00:05:28.177 "enable_ktls": false 00:05:28.177 } 00:05:28.177 } 00:05:28.177 ] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "vmd", 00:05:28.177 "config": [] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "accel", 00:05:28.177 "config": [ 00:05:28.177 { 00:05:28.177 "method": "accel_set_options", 00:05:28.177 "params": { 00:05:28.177 "small_cache_size": 128, 00:05:28.177 "large_cache_size": 16, 00:05:28.177 "task_count": 2048, 00:05:28.177 "sequence_count": 2048, 00:05:28.177 "buf_count": 2048 00:05:28.177 } 00:05:28.177 } 00:05:28.177 ] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "bdev", 00:05:28.177 "config": [ 00:05:28.177 { 00:05:28.177 "method": "bdev_set_options", 00:05:28.177 "params": { 00:05:28.177 "bdev_io_pool_size": 65535, 00:05:28.177 "bdev_io_cache_size": 256, 00:05:28.177 "bdev_auto_examine": true, 00:05:28.177 "iobuf_small_cache_size": 128, 00:05:28.177 "iobuf_large_cache_size": 16 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "bdev_raid_set_options", 00:05:28.177 "params": { 00:05:28.177 "process_window_size_kb": 1024, 00:05:28.177 "process_max_bandwidth_mb_sec": 0 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "bdev_iscsi_set_options", 00:05:28.177 "params": { 00:05:28.177 "timeout_sec": 30 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "bdev_nvme_set_options", 00:05:28.177 "params": { 00:05:28.177 "action_on_timeout": "none", 00:05:28.177 "timeout_us": 0, 00:05:28.177 "timeout_admin_us": 0, 00:05:28.177 "keep_alive_timeout_ms": 10000, 00:05:28.177 "arbitration_burst": 0, 00:05:28.177 "low_priority_weight": 0, 00:05:28.177 "medium_priority_weight": 0, 00:05:28.177 "high_priority_weight": 0, 00:05:28.177 "nvme_adminq_poll_period_us": 10000, 00:05:28.177 "nvme_ioq_poll_period_us": 0, 00:05:28.177 "io_queue_requests": 0, 00:05:28.177 "delay_cmd_submit": true, 00:05:28.177 "transport_retry_count": 4, 00:05:28.177 "bdev_retry_count": 3, 00:05:28.177 "transport_ack_timeout": 0, 00:05:28.177 "ctrlr_loss_timeout_sec": 0, 00:05:28.177 "reconnect_delay_sec": 0, 00:05:28.177 "fast_io_fail_timeout_sec": 0, 00:05:28.177 "disable_auto_failback": false, 00:05:28.177 "generate_uuids": false, 00:05:28.177 "transport_tos": 
0, 00:05:28.177 "nvme_error_stat": false, 00:05:28.177 "rdma_srq_size": 0, 00:05:28.177 "io_path_stat": false, 00:05:28.177 "allow_accel_sequence": false, 00:05:28.177 "rdma_max_cq_size": 0, 00:05:28.177 "rdma_cm_event_timeout_ms": 0, 00:05:28.177 "dhchap_digests": [ 00:05:28.177 "sha256", 00:05:28.177 "sha384", 00:05:28.177 "sha512" 00:05:28.177 ], 00:05:28.177 "dhchap_dhgroups": [ 00:05:28.177 "null", 00:05:28.177 "ffdhe2048", 00:05:28.177 "ffdhe3072", 00:05:28.177 "ffdhe4096", 00:05:28.177 "ffdhe6144", 00:05:28.177 "ffdhe8192" 00:05:28.177 ] 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "bdev_nvme_set_hotplug", 00:05:28.177 "params": { 00:05:28.177 "period_us": 100000, 00:05:28.177 "enable": false 00:05:28.177 } 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "method": "bdev_wait_for_examine" 00:05:28.177 } 00:05:28.177 ] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "scsi", 00:05:28.177 "config": null 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "scheduler", 00:05:28.177 "config": [ 00:05:28.177 { 00:05:28.177 "method": "framework_set_scheduler", 00:05:28.177 "params": { 00:05:28.177 "name": "static" 00:05:28.177 } 00:05:28.177 } 00:05:28.177 ] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "vhost_scsi", 00:05:28.177 "config": [] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "vhost_blk", 00:05:28.177 "config": [] 00:05:28.177 }, 00:05:28.177 { 00:05:28.177 "subsystem": "ublk", 00:05:28.177 "config": [] 00:05:28.177 }, 00:05:28.178 { 00:05:28.178 "subsystem": "nbd", 00:05:28.178 "config": [] 00:05:28.178 }, 00:05:28.178 { 00:05:28.178 "subsystem": "nvmf", 00:05:28.178 "config": [ 00:05:28.178 { 00:05:28.178 "method": "nvmf_set_config", 00:05:28.178 "params": { 00:05:28.178 "discovery_filter": "match_any", 00:05:28.178 "admin_cmd_passthru": { 00:05:28.178 "identify_ctrlr": false 00:05:28.178 }, 00:05:28.178 "dhchap_digests": [ 00:05:28.178 "sha256", 00:05:28.178 "sha384", 00:05:28.178 "sha512" 00:05:28.178 ], 00:05:28.178 "dhchap_dhgroups": [ 00:05:28.178 "null", 00:05:28.178 "ffdhe2048", 00:05:28.178 "ffdhe3072", 00:05:28.178 "ffdhe4096", 00:05:28.178 "ffdhe6144", 00:05:28.178 "ffdhe8192" 00:05:28.178 ] 00:05:28.178 } 00:05:28.178 }, 00:05:28.178 { 00:05:28.178 "method": "nvmf_set_max_subsystems", 00:05:28.178 "params": { 00:05:28.178 "max_subsystems": 1024 00:05:28.178 } 00:05:28.178 }, 00:05:28.178 { 00:05:28.178 "method": "nvmf_set_crdt", 00:05:28.178 "params": { 00:05:28.178 "crdt1": 0, 00:05:28.178 "crdt2": 0, 00:05:28.178 "crdt3": 0 00:05:28.178 } 00:05:28.178 }, 00:05:28.178 { 00:05:28.178 "method": "nvmf_create_transport", 00:05:28.178 "params": { 00:05:28.178 "trtype": "TCP", 00:05:28.178 "max_queue_depth": 128, 00:05:28.178 "max_io_qpairs_per_ctrlr": 127, 00:05:28.178 "in_capsule_data_size": 4096, 00:05:28.178 "max_io_size": 131072, 00:05:28.178 "io_unit_size": 131072, 00:05:28.178 "max_aq_depth": 128, 00:05:28.178 "num_shared_buffers": 511, 00:05:28.178 "buf_cache_size": 4294967295, 00:05:28.178 "dif_insert_or_strip": false, 00:05:28.178 "zcopy": false, 00:05:28.178 "c2h_success": true, 00:05:28.178 "sock_priority": 0, 00:05:28.178 "abort_timeout_sec": 1, 00:05:28.178 "ack_timeout": 0, 00:05:28.178 "data_wr_pool_size": 0 00:05:28.178 } 00:05:28.178 } 00:05:28.178 ] 00:05:28.178 }, 00:05:28.178 { 00:05:28.178 "subsystem": "iscsi", 00:05:28.178 "config": [ 00:05:28.178 { 00:05:28.178 "method": "iscsi_set_options", 00:05:28.178 "params": { 00:05:28.178 "node_base": "iqn.2016-06.io.spdk", 00:05:28.178 "max_sessions": 
128, 00:05:28.178 "max_connections_per_session": 2, 00:05:28.178 "max_queue_depth": 64, 00:05:28.178 "default_time2wait": 2, 00:05:28.178 "default_time2retain": 20, 00:05:28.178 "first_burst_length": 8192, 00:05:28.178 "immediate_data": true, 00:05:28.178 "allow_duplicated_isid": false, 00:05:28.178 "error_recovery_level": 0, 00:05:28.178 "nop_timeout": 60, 00:05:28.178 "nop_in_interval": 30, 00:05:28.178 "disable_chap": false, 00:05:28.178 "require_chap": false, 00:05:28.178 "mutual_chap": false, 00:05:28.178 "chap_group": 0, 00:05:28.178 "max_large_datain_per_connection": 64, 00:05:28.178 "max_r2t_per_connection": 4, 00:05:28.178 "pdu_pool_size": 36864, 00:05:28.178 "immediate_data_pool_size": 16384, 00:05:28.178 "data_out_pool_size": 2048 00:05:28.178 } 00:05:28.178 } 00:05:28.178 ] 00:05:28.178 } 00:05:28.178 ] 00:05:28.178 } 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1814091 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1814091 ']' 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1814091 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814091 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814091' 00:05:28.178 killing process with pid 1814091 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1814091 00:05:28.178 10:23:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1814091 00:05:28.512 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1814438 00:05:28.512 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.512 10:23:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1814438 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1814438 ']' 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1814438 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814438 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1814438' 00:05:33.800 killing process with pid 1814438 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1814438 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1814438 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.800 00:05:33.800 real 0m6.589s 00:05:33.800 user 0m6.462s 00:05:33.800 sys 0m0.616s 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.800 ************************************ 00:05:33.800 END TEST skip_rpc_with_json 00:05:33.800 ************************************ 00:05:33.800 10:23:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:33.800 10:23:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.800 10:23:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.800 10:23:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.800 ************************************ 00:05:33.800 START TEST skip_rpc_with_delay 00:05:33.800 ************************************ 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.800 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.801 
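The skip_rpc_with_json run that finishes above is a save/replay round trip: configure a live target over RPC, snapshot it with save_config, then boot a second target from the snapshot with the RPC server disabled and confirm the TCP transport re-initializes from JSON alone. A minimal sketch of that flow (the sleeps, filenames, and pid handling here are illustrative stand-ins, not the harness's exact variables):

    # configure a live target over the default RPC socket, then snapshot it
    ./build/bin/spdk_tgt -m 0x1 & pid=$!
    sleep 3                                        # stand-in for the harness's waitforlisten
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    kill -SIGINT "$pid"; wait "$pid"
    # replay the snapshot with no RPC server and verify the transport returns
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 & pid=$!
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo 'replay restored the TCP transport'
    kill -SIGINT "$pid"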
[2024-11-20 10:23:05.986808] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.801 00:05:33.801 real 0m0.076s 00:05:33.801 user 0m0.044s 00:05:33.801 sys 0m0.030s 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.801 10:23:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:33.801 ************************************ 00:05:33.801 END TEST skip_rpc_with_delay 00:05:33.801 ************************************ 00:05:33.801 10:23:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:33.801 10:23:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:33.801 10:23:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:33.801 10:23:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.801 10:23:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.801 10:23:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.801 ************************************ 00:05:33.801 START TEST exit_on_failed_rpc_init 00:05:33.801 ************************************ 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1815503 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1815503 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1815503 ']' 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.801 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.801 [2024-11-20 10:23:06.138769] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:33.801 [2024-11-20 10:23:06.138816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815503 ] 00:05:34.062 [2024-11-20 10:23:06.220749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.062 [2024-11-20 10:23:06.251196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.634 10:23:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.634 [2024-11-20 10:23:06.999633] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:34.634 [2024-11-20 10:23:06.999685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815665 ] 00:05:34.896 [2024-11-20 10:23:07.089362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.896 [2024-11-20 10:23:07.125457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.896 [2024-11-20 10:23:07.125507] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
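The ERROR lines around here are the expected outcome, not a fault: a second spdk_tgt bound to the default /var/tmp/spdk.sock cannot start while the first still owns it, so exit_on_failed_rpc_init gets the non-zero exit it is testing for. A hedged illustration of the collision and the usual way around it (the alternate socket path is made up for the example):

    ./build/bin/spdk_tgt -m 0x1 &                  # first instance owns /var/tmp/spdk.sock
    sleep 3
    ./build/bin/spdk_tgt -m 0x2                    # fails: RPC socket path already in use
    echo "second instance exited with rc=$?"
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &   # a distinct -r socket can coexist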
00:05:34.896 [2024-11-20 10:23:07.125517] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:34.896 [2024-11-20 10:23:07.125524] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1815503 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1815503 ']' 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1815503 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815503 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815503' 00:05:34.896 killing process with pid 1815503 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1815503 00:05:34.896 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1815503 00:05:35.157 00:05:35.157 real 0m1.332s 00:05:35.157 user 0m1.583s 00:05:35.157 sys 0m0.361s 00:05:35.157 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.157 10:23:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 ************************************ 00:05:35.157 END TEST exit_on_failed_rpc_init 00:05:35.157 ************************************ 00:05:35.157 10:23:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.157 00:05:35.157 real 0m13.784s 00:05:35.157 user 0m13.355s 00:05:35.157 sys 0m1.607s 00:05:35.157 10:23:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.157 10:23:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 ************************************ 00:05:35.157 END TEST skip_rpc 00:05:35.157 ************************************ 00:05:35.157 10:23:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.157 10:23:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.157 10:23:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.157 10:23:07 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.418 ************************************ 00:05:35.418 START TEST rpc_client 00:05:35.418 ************************************ 00:05:35.418 10:23:07 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.418 * Looking for test storage... 00:05:35.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:35.418 10:23:07 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.418 10:23:07 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.418 10:23:07 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.418 10:23:07 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.418 10:23:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.419 10:23:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.419 --rc genhtml_branch_coverage=1 00:05:35.419 --rc genhtml_function_coverage=1 00:05:35.419 --rc genhtml_legend=1 00:05:35.419 --rc geninfo_all_blocks=1 00:05:35.419 --rc geninfo_unexecuted_blocks=1 00:05:35.419 00:05:35.419 ' 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.419 --rc genhtml_branch_coverage=1 00:05:35.419 --rc genhtml_function_coverage=1 00:05:35.419 --rc genhtml_legend=1 00:05:35.419 --rc geninfo_all_blocks=1 00:05:35.419 --rc geninfo_unexecuted_blocks=1 00:05:35.419 00:05:35.419 ' 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.419 --rc genhtml_branch_coverage=1 00:05:35.419 --rc genhtml_function_coverage=1 00:05:35.419 --rc genhtml_legend=1 00:05:35.419 --rc geninfo_all_blocks=1 00:05:35.419 --rc geninfo_unexecuted_blocks=1 00:05:35.419 00:05:35.419 ' 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.419 --rc genhtml_branch_coverage=1 00:05:35.419 --rc genhtml_function_coverage=1 00:05:35.419 --rc genhtml_legend=1 00:05:35.419 --rc geninfo_all_blocks=1 00:05:35.419 --rc geninfo_unexecuted_blocks=1 00:05:35.419 00:05:35.419 ' 00:05:35.419 10:23:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:35.419 OK 00:05:35.419 10:23:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.419 00:05:35.419 real 0m0.229s 00:05:35.419 user 0m0.134s 00:05:35.419 sys 0m0.109s 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.419 10:23:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.419 ************************************ 00:05:35.419 END TEST rpc_client 00:05:35.419 ************************************ 00:05:35.681 10:23:07 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
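The lcov probe traced in the rpc_client section works by splitting dotted version strings on '.', '-', and ':' and comparing numeric fields left to right, with missing fields treated as zero. A standalone rendition of that comparison (the function name is illustrative; scripts/common.sh spreads the same logic across cmp_versions and its helpers):

    ver_lt() {                                     # succeed when $1 sorts strictly before $2
      local IFS=.-: i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                     # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'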
00:05:35.681 10:23:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.681 10:23:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.681 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:05:35.681 ************************************ 00:05:35.681 START TEST json_config 00:05:35.681 ************************************ 00:05:35.681 10:23:07 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.681 10:23:07 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.681 10:23:07 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.681 10:23:07 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.681 10:23:07 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.681 10:23:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.681 10:23:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.681 10:23:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.681 10:23:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.681 10:23:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.681 10:23:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.681 10:23:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.681 10:23:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.681 10:23:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.681 10:23:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.681 10:23:07 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.681 10:23:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.681 10:23:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.681 10:23:07 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.681 10:23:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.681 10:23:07 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.681 10:23:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.681 10:23:08 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.681 10:23:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.681 10:23:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.681 10:23:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.681 10:23:08 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.681 10:23:08 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.681 10:23:08 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 10:23:08 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 10:23:08 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 10:23:08 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 10:23:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:35.681 10:23:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.681 10:23:08 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.681 10:23:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.681 10:23:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.681 10:23:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.681 10:23:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.681 10:23:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.681 10:23:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.682 10:23:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.682 10:23:08 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.682 10:23:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:35.682 10:23:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.682 10:23:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:35.682 INFO: JSON configuration test init 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:35.682 10:23:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.682 10:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.682 10:23:08 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:35.682 10:23:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.682 10:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.943 10:23:08 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:35.943 10:23:08 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:35.943 10:23:08 json_config -- json_config/common.sh@10 -- # shift 00:05:35.943 10:23:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.943 10:23:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.943 10:23:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.943 10:23:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.943 10:23:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.943 10:23:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1815978 00:05:35.943 10:23:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.943 Waiting for target to run... 00:05:35.943 10:23:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1815978 /var/tmp/spdk_tgt.sock 00:05:35.943 10:23:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:35.943 10:23:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 1815978 ']' 00:05:35.943 10:23:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.943 10:23:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.943 10:23:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.943 10:23:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.943 10:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.943 [2024-11-20 10:23:08.115672] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:35.943 [2024-11-20 10:23:08.115728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815978 ] 00:05:36.205 [2024-11-20 10:23:08.366904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.205 [2024-11-20 10:23:08.392309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.776 10:23:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.777 10:23:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:36.777 10:23:08 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.777 00:05:36.777 10:23:08 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:36.777 10:23:08 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:36.777 10:23:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.777 10:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.777 10:23:08 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:36.777 10:23:08 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:36.777 10:23:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.777 10:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.777 10:23:08 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:36.777 10:23:08 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:36.777 10:23:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:37.347 10:23:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.347 10:23:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:37.347 10:23:09 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:37.347 10:23:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:37.348 10:23:09 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@54 -- # sort 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:37.348 10:23:09 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:37.348 10:23:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.348 10:23:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:37.609 10:23:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.609 10:23:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.609 10:23:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.609 MallocForNvmf0 00:05:37.609 10:23:09 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.609 10:23:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.871 MallocForNvmf1 00:05:37.871 10:23:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.871 10:23:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.132 [2024-11-20 10:23:10.245397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.132 10:23:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.132 10:23:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.132 10:23:10 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.132 10:23:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.393 10:23:10 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.393 10:23:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.654 10:23:10 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.654 10:23:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.654 [2024-11-20 10:23:10.951546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.654 10:23:10 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:38.654 10:23:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.654 10:23:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.654 10:23:11 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:38.654 10:23:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.654 10:23:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.915 10:23:11 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:38.915 10:23:11 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.915 10:23:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.915 MallocBdevForConfigChangeCheck 00:05:38.915 10:23:11 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:38.915 10:23:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.915 10:23:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.915 10:23:11 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:38.915 10:23:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.487 10:23:11 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:39.487 INFO: shutting down applications... 
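[editor] A minimal sketch of the NVMe-oF target setup traced above, assuming a spdk_tgt already listening on /var/tmp/spdk_tgt.sock and the repository's scripts/rpc.py; the RPC names and arguments are exactly those in the trace:
    # Two malloc bdevs (size in MB, then block size), a TCP transport,
    # one subsystem with both namespaces and a TCP listener.
    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420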
00:05:39.487 10:23:11 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:39.487 10:23:11 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:39.487 10:23:11 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:39.487 10:23:11 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.748 Calling clear_iscsi_subsystem 00:05:39.748 Calling clear_nvmf_subsystem 00:05:39.748 Calling clear_nbd_subsystem 00:05:39.748 Calling clear_ublk_subsystem 00:05:39.748 Calling clear_vhost_blk_subsystem 00:05:39.748 Calling clear_vhost_scsi_subsystem 00:05:39.748 Calling clear_bdev_subsystem 00:05:39.748 10:23:12 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:39.748 10:23:12 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:39.748 10:23:12 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:39.748 10:23:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.748 10:23:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.748 10:23:12 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.320 10:23:12 json_config -- json_config/json_config.sh@352 -- # break 00:05:40.320 10:23:12 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:40.320 10:23:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:40.320 10:23:12 json_config -- json_config/common.sh@31 -- # local app=target 00:05:40.320 10:23:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.320 10:23:12 json_config -- json_config/common.sh@35 -- # [[ -n 1815978 ]] 00:05:40.320 10:23:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1815978 00:05:40.320 10:23:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.320 10:23:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.321 10:23:12 json_config -- json_config/common.sh@41 -- # kill -0 1815978 00:05:40.321 10:23:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.582 10:23:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.582 10:23:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.582 10:23:12 json_config -- json_config/common.sh@41 -- # kill -0 1815978 00:05:40.582 10:23:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.582 10:23:12 json_config -- json_config/common.sh@43 -- # break 00:05:40.582 10:23:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.582 10:23:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.582 SPDK target shutdown done 00:05:40.582 10:23:12 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:40.582 INFO: relaunching applications... 
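[editor] The shutdown just traced follows a send-SIGINT-then-poll pattern: kill -SIGINT, then up to 30 half-second checks with kill -0. A minimal sketch of that pattern, using the target PID from this run:
    pid=1815978                               # spdk_tgt PID from this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # process exited, shutdown done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'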
00:05:40.582 10:23:12 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.582 10:23:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.582 10:23:12 json_config -- json_config/common.sh@10 -- # shift 00:05:40.582 10:23:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.582 10:23:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.582 10:23:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.582 10:23:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.582 10:23:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.582 10:23:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1817115 00:05:40.582 10:23:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.582 Waiting for target to run... 00:05:40.582 10:23:12 json_config -- json_config/common.sh@25 -- # waitforlisten 1817115 /var/tmp/spdk_tgt.sock 00:05:40.582 10:23:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.582 10:23:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 1817115 ']' 00:05:40.582 10:23:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.582 10:23:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.582 10:23:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.582 10:23:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.582 10:23:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.842 [2024-11-20 10:23:12.986681] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:40.842 [2024-11-20 10:23:12.986742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817115 ] 00:05:41.102 [2024-11-20 10:23:13.302163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.102 [2024-11-20 10:23:13.326686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.675 [2024-11-20 10:23:13.828098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.675 [2024-11-20 10:23:13.860571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.675 10:23:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.675 10:23:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:41.675 10:23:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.675 00:05:41.675 10:23:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:41.675 10:23:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.675 INFO: Checking if target configuration is the same... 
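[editor] The relaunch above restarts the target from the JSON produced by save_config and then waits for the RPC socket. A sketch of the equivalent manual steps; the until-loop is an assumption standing in for the harness's waitforlisten helper (spdk_get_version is a real RPC, used here only as a liveness probe):
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    pid=$!
    # poll the UNIX-domain RPC socket until the target answers
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version \
        >/dev/null 2>&1; do
        sleep 0.2
    done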
00:05:41.675 10:23:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:41.675 10:23:13 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.675 10:23:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.675 + '[' 2 -ne 2 ']' 00:05:41.675 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.675 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.675 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.675 +++ basename /dev/fd/62 00:05:41.675 ++ mktemp /tmp/62.XXX 00:05:41.675 + tmp_file_1=/tmp/62.SuY 00:05:41.675 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.675 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.675 + tmp_file_2=/tmp/spdk_tgt_config.json.Qf2 00:05:41.675 + ret=0 00:05:41.675 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.936 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.936 + diff -u /tmp/62.SuY /tmp/spdk_tgt_config.json.Qf2 00:05:41.936 + echo 'INFO: JSON config files are the same' 00:05:41.936 INFO: JSON config files are the same 00:05:41.936 + rm /tmp/62.SuY /tmp/spdk_tgt_config.json.Qf2 00:05:41.936 + exit 0 00:05:41.936 10:23:14 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:41.936 10:23:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.936 INFO: changing configuration and checking if this can be detected... 00:05:41.936 10:23:14 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.936 10:23:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.197 10:23:14 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.197 10:23:14 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:42.197 10:23:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.197 + '[' 2 -ne 2 ']' 00:05:42.197 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.197 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:42.197 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.197 +++ basename /dev/fd/62 00:05:42.197 ++ mktemp /tmp/62.XXX 00:05:42.197 + tmp_file_1=/tmp/62.Xtu 00:05:42.197 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.197 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.197 + tmp_file_2=/tmp/spdk_tgt_config.json.1Gc 00:05:42.197 + ret=0 00:05:42.197 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.457 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.718 + diff -u /tmp/62.Xtu /tmp/spdk_tgt_config.json.1Gc 00:05:42.718 + ret=1 00:05:42.718 + echo '=== Start of file: /tmp/62.Xtu ===' 00:05:42.718 + cat /tmp/62.Xtu 00:05:42.718 + echo '=== End of file: /tmp/62.Xtu ===' 00:05:42.718 + echo '' 00:05:42.718 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1Gc ===' 00:05:42.718 + cat /tmp/spdk_tgt_config.json.1Gc 00:05:42.718 + echo '=== End of file: /tmp/spdk_tgt_config.json.1Gc ===' 00:05:42.718 + echo '' 00:05:42.718 + rm /tmp/62.Xtu /tmp/spdk_tgt_config.json.1Gc 00:05:42.718 + exit 1 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:42.718 INFO: configuration change detected. 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 1817115 ]] 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.718 10:23:14 json_config -- json_config/json_config.sh@330 -- # killprocess 1817115 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@954 -- # '[' -z 1817115 ']' 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@958 -- # kill -0 1817115 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@959 -- # uname 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.718 10:23:14 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1817115 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1817115' 00:05:42.718 killing process with pid 1817115 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@973 -- # kill 1817115 00:05:42.718 10:23:14 json_config -- common/autotest_common.sh@978 -- # wait 1817115 00:05:42.980 10:23:15 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.980 10:23:15 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:42.980 10:23:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.980 10:23:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 10:23:15 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:42.980 10:23:15 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:42.980 INFO: Success 00:05:42.980 00:05:42.980 real 0m7.460s 00:05:42.980 user 0m9.120s 00:05:42.980 sys 0m1.903s 00:05:42.980 10:23:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.980 10:23:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 ************************************ 00:05:42.980 END TEST json_config 00:05:42.980 ************************************ 00:05:42.980 10:23:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.980 10:23:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.980 10:23:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.980 10:23:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.242 ************************************ 00:05:43.242 START TEST json_config_extra_key 00:05:43.242 ************************************ 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.242 10:23:15 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.242 10:23:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.242 --rc genhtml_branch_coverage=1 00:05:43.242 --rc genhtml_function_coverage=1 00:05:43.242 --rc genhtml_legend=1 00:05:43.242 --rc geninfo_all_blocks=1 00:05:43.242 --rc geninfo_unexecuted_blocks=1 00:05:43.242 00:05:43.242 ' 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.242 --rc genhtml_branch_coverage=1 00:05:43.242 --rc genhtml_function_coverage=1 00:05:43.242 --rc genhtml_legend=1 00:05:43.242 --rc geninfo_all_blocks=1 00:05:43.242 --rc geninfo_unexecuted_blocks=1 00:05:43.242 00:05:43.242 ' 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.242 --rc genhtml_branch_coverage=1 00:05:43.242 --rc genhtml_function_coverage=1 00:05:43.242 --rc genhtml_legend=1 00:05:43.242 --rc geninfo_all_blocks=1 00:05:43.242 --rc geninfo_unexecuted_blocks=1 00:05:43.242 00:05:43.242 ' 00:05:43.242 10:23:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.243 --rc genhtml_branch_coverage=1 00:05:43.243 --rc genhtml_function_coverage=1 00:05:43.243 --rc genhtml_legend=1 00:05:43.243 --rc geninfo_all_blocks=1 00:05:43.243 --rc geninfo_unexecuted_blocks=1 00:05:43.243 00:05:43.243 ' 00:05:43.243 10:23:15 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.243 10:23:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.243 10:23:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.243 10:23:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.243 10:23:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.243 10:23:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.243 10:23:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.243 10:23:15 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.243 10:23:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.243 10:23:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.243 10:23:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.243 INFO: launching applications... 
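[editor] The "[: : integer expression expected" message above is the shell error from the traced test '[' '' -eq 1 ']' at nvmf/common.sh line 33: an unset variable expanded to an empty string before the numeric -eq comparison. A defensive form supplies a default before comparing (the variable name below is illustrative, not from the source):
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi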
00:05:43.243 10:23:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1817821 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.243 Waiting for target to run... 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1817821 /var/tmp/spdk_tgt.sock 00:05:43.243 10:23:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1817821 ']' 00:05:43.243 10:23:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.243 10:23:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.243 10:23:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.243 10:23:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.243 10:23:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.243 10:23:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.504 [2024-11-20 10:23:15.653764] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:43.504 [2024-11-20 10:23:15.653839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817821 ] 00:05:43.764 [2024-11-20 10:23:15.941720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.764 [2024-11-20 10:23:15.967045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.335 10:23:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.335 10:23:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.335 00:05:44.335 10:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.335 INFO: shutting down applications... 
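[editor] The extra-key test drives the target through per-app bookkeeping, visible in the declare -A traces above: bash associative arrays keyed by app name hold the PID, RPC socket, CLI parameters, and config path. A minimal sketch of that pattern, with values taken from this run:
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='./test/json_config/extra_key.json')

    app=target
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!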
00:05:44.335 10:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1817821 ]] 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1817821 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1817821 00:05:44.335 10:23:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1817821 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.595 10:23:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.595 SPDK target shutdown done 00:05:44.595 10:23:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.595 Success 00:05:44.595 00:05:44.595 real 0m1.571s 00:05:44.596 user 0m1.175s 00:05:44.596 sys 0m0.419s 00:05:44.596 10:23:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.596 10:23:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.596 ************************************ 00:05:44.596 END TEST json_config_extra_key 00:05:44.596 ************************************ 00:05:44.856 10:23:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.856 10:23:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.856 10:23:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.856 10:23:16 -- common/autotest_common.sh@10 -- # set +x 00:05:44.856 ************************************ 00:05:44.856 START TEST alias_rpc 00:05:44.856 ************************************ 00:05:44.856 10:23:17 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.856 * Looking for test storage... 
00:05:44.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.856 10:23:17 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.856 10:23:17 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.857 10:23:17 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.857 10:23:17 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.857 10:23:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:44.857 10:23:17 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.857 10:23:17 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.857 --rc genhtml_branch_coverage=1 00:05:44.857 --rc genhtml_function_coverage=1 00:05:44.857 --rc genhtml_legend=1 00:05:44.857 --rc geninfo_all_blocks=1 00:05:44.857 --rc geninfo_unexecuted_blocks=1 00:05:44.857 00:05:44.857 ' 00:05:44.857 10:23:17 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.857 --rc genhtml_branch_coverage=1 00:05:44.857 --rc genhtml_function_coverage=1 00:05:44.857 --rc genhtml_legend=1 00:05:44.857 --rc geninfo_all_blocks=1 00:05:44.857 --rc geninfo_unexecuted_blocks=1 00:05:44.857 00:05:44.857 ' 00:05:44.857 10:23:17 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.857 --rc genhtml_branch_coverage=1 00:05:44.857 --rc genhtml_function_coverage=1 00:05:44.857 --rc genhtml_legend=1 00:05:44.857 --rc geninfo_all_blocks=1 00:05:44.857 --rc geninfo_unexecuted_blocks=1 00:05:44.857 00:05:44.857 ' 00:05:44.857 10:23:17 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.857 --rc genhtml_branch_coverage=1 00:05:44.857 --rc genhtml_function_coverage=1 00:05:44.857 --rc genhtml_legend=1 00:05:44.857 --rc geninfo_all_blocks=1 00:05:44.857 --rc geninfo_unexecuted_blocks=1 00:05:44.857 00:05:44.857 ' 00:05:44.857 10:23:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.118 10:23:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1818176 00:05:45.118 10:23:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1818176 00:05:45.118 10:23:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1818176 ']' 00:05:45.118 10:23:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.118 10:23:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.118 10:23:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.118 10:23:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.118 10:23:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.118 10:23:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.118 [2024-11-20 10:23:17.289997] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:45.118 [2024-11-20 10:23:17.290073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818176 ] 00:05:45.118 [2024-11-20 10:23:17.375789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.118 [2024-11-20 10:23:17.411055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.058 10:23:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:46.058 10:23:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1818176 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1818176 ']' 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1818176 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818176 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818176' 00:05:46.058 killing process with pid 1818176 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 1818176 00:05:46.058 10:23:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 1818176 00:05:46.319 00:05:46.319 real 0m1.477s 00:05:46.319 user 0m1.628s 00:05:46.319 sys 0m0.401s 00:05:46.319 10:23:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.319 10:23:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.319 ************************************ 00:05:46.319 END TEST alias_rpc 00:05:46.319 ************************************ 00:05:46.319 10:23:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:46.319 10:23:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.319 10:23:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.319 10:23:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.319 10:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:46.319 ************************************ 00:05:46.319 START TEST spdkcli_tcp 00:05:46.319 ************************************ 00:05:46.319 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.319 * Looking for test storage... 
00:05:46.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.319 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.319 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.319 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.581 10:23:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.581 --rc genhtml_branch_coverage=1 00:05:46.581 --rc genhtml_function_coverage=1 00:05:46.581 --rc genhtml_legend=1 00:05:46.581 --rc geninfo_all_blocks=1 00:05:46.581 --rc geninfo_unexecuted_blocks=1 00:05:46.581 00:05:46.581 ' 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.581 --rc genhtml_branch_coverage=1 00:05:46.581 --rc genhtml_function_coverage=1 00:05:46.581 --rc genhtml_legend=1 00:05:46.581 --rc geninfo_all_blocks=1 00:05:46.581 --rc 
geninfo_unexecuted_blocks=1 00:05:46.581 00:05:46.581 ' 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.581 --rc genhtml_branch_coverage=1 00:05:46.581 --rc genhtml_function_coverage=1 00:05:46.581 --rc genhtml_legend=1 00:05:46.581 --rc geninfo_all_blocks=1 00:05:46.581 --rc geninfo_unexecuted_blocks=1 00:05:46.581 00:05:46.581 ' 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.581 --rc genhtml_branch_coverage=1 00:05:46.581 --rc genhtml_function_coverage=1 00:05:46.581 --rc genhtml_legend=1 00:05:46.581 --rc geninfo_all_blocks=1 00:05:46.581 --rc geninfo_unexecuted_blocks=1 00:05:46.581 00:05:46.581 ' 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1818513 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1818513 00:05:46.581 10:23:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1818513 ']' 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.581 10:23:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.581 [2024-11-20 10:23:18.852943] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:46.582 [2024-11-20 10:23:18.853022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818513 ] 00:05:46.582 [2024-11-20 10:23:18.941230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.842 [2024-11-20 10:23:18.977742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.842 [2024-11-20 10:23:18.977743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.412 10:23:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.413 10:23:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:47.413 10:23:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1818708 00:05:47.413 10:23:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.413 10:23:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.674 [ 00:05:47.674 "bdev_malloc_delete", 00:05:47.674 "bdev_malloc_create", 00:05:47.674 "bdev_null_resize", 00:05:47.674 "bdev_null_delete", 00:05:47.674 "bdev_null_create", 00:05:47.674 "bdev_nvme_cuse_unregister", 00:05:47.674 "bdev_nvme_cuse_register", 00:05:47.674 "bdev_opal_new_user", 00:05:47.674 "bdev_opal_set_lock_state", 00:05:47.674 "bdev_opal_delete", 00:05:47.674 "bdev_opal_get_info", 00:05:47.674 "bdev_opal_create", 00:05:47.674 "bdev_nvme_opal_revert", 00:05:47.674 "bdev_nvme_opal_init", 00:05:47.674 "bdev_nvme_send_cmd", 00:05:47.674 "bdev_nvme_set_keys", 00:05:47.674 "bdev_nvme_get_path_iostat", 00:05:47.674 "bdev_nvme_get_mdns_discovery_info", 00:05:47.674 "bdev_nvme_stop_mdns_discovery", 00:05:47.674 "bdev_nvme_start_mdns_discovery", 00:05:47.674 "bdev_nvme_set_multipath_policy", 00:05:47.674 "bdev_nvme_set_preferred_path", 00:05:47.674 "bdev_nvme_get_io_paths", 00:05:47.674 "bdev_nvme_remove_error_injection", 00:05:47.674 "bdev_nvme_add_error_injection", 00:05:47.674 "bdev_nvme_get_discovery_info", 00:05:47.674 "bdev_nvme_stop_discovery", 00:05:47.674 "bdev_nvme_start_discovery", 00:05:47.674 "bdev_nvme_get_controller_health_info", 00:05:47.674 "bdev_nvme_disable_controller", 00:05:47.674 "bdev_nvme_enable_controller", 00:05:47.674 "bdev_nvme_reset_controller", 00:05:47.674 "bdev_nvme_get_transport_statistics", 00:05:47.674 "bdev_nvme_apply_firmware", 00:05:47.674 "bdev_nvme_detach_controller", 00:05:47.674 "bdev_nvme_get_controllers", 00:05:47.674 "bdev_nvme_attach_controller", 00:05:47.674 "bdev_nvme_set_hotplug", 00:05:47.674 "bdev_nvme_set_options", 00:05:47.674 "bdev_passthru_delete", 00:05:47.674 "bdev_passthru_create", 00:05:47.674 "bdev_lvol_set_parent_bdev", 00:05:47.674 "bdev_lvol_set_parent", 00:05:47.674 "bdev_lvol_check_shallow_copy", 00:05:47.674 "bdev_lvol_start_shallow_copy", 00:05:47.674 "bdev_lvol_grow_lvstore", 00:05:47.674 "bdev_lvol_get_lvols", 00:05:47.674 "bdev_lvol_get_lvstores", 00:05:47.674 "bdev_lvol_delete", 00:05:47.674 "bdev_lvol_set_read_only", 00:05:47.674 "bdev_lvol_resize", 00:05:47.674 "bdev_lvol_decouple_parent", 00:05:47.674 "bdev_lvol_inflate", 00:05:47.674 "bdev_lvol_rename", 00:05:47.674 "bdev_lvol_clone_bdev", 00:05:47.674 "bdev_lvol_clone", 00:05:47.674 "bdev_lvol_snapshot", 00:05:47.674 "bdev_lvol_create", 00:05:47.674 "bdev_lvol_delete_lvstore", 00:05:47.674 "bdev_lvol_rename_lvstore", 
00:05:47.674 "bdev_lvol_create_lvstore", 00:05:47.674 "bdev_raid_set_options", 00:05:47.674 "bdev_raid_remove_base_bdev", 00:05:47.674 "bdev_raid_add_base_bdev", 00:05:47.674 "bdev_raid_delete", 00:05:47.674 "bdev_raid_create", 00:05:47.674 "bdev_raid_get_bdevs", 00:05:47.674 "bdev_error_inject_error", 00:05:47.674 "bdev_error_delete", 00:05:47.674 "bdev_error_create", 00:05:47.674 "bdev_split_delete", 00:05:47.674 "bdev_split_create", 00:05:47.674 "bdev_delay_delete", 00:05:47.674 "bdev_delay_create", 00:05:47.674 "bdev_delay_update_latency", 00:05:47.674 "bdev_zone_block_delete", 00:05:47.674 "bdev_zone_block_create", 00:05:47.675 "blobfs_create", 00:05:47.675 "blobfs_detect", 00:05:47.675 "blobfs_set_cache_size", 00:05:47.675 "bdev_aio_delete", 00:05:47.675 "bdev_aio_rescan", 00:05:47.675 "bdev_aio_create", 00:05:47.675 "bdev_ftl_set_property", 00:05:47.675 "bdev_ftl_get_properties", 00:05:47.675 "bdev_ftl_get_stats", 00:05:47.675 "bdev_ftl_unmap", 00:05:47.675 "bdev_ftl_unload", 00:05:47.675 "bdev_ftl_delete", 00:05:47.675 "bdev_ftl_load", 00:05:47.675 "bdev_ftl_create", 00:05:47.675 "bdev_virtio_attach_controller", 00:05:47.675 "bdev_virtio_scsi_get_devices", 00:05:47.675 "bdev_virtio_detach_controller", 00:05:47.675 "bdev_virtio_blk_set_hotplug", 00:05:47.675 "bdev_iscsi_delete", 00:05:47.675 "bdev_iscsi_create", 00:05:47.675 "bdev_iscsi_set_options", 00:05:47.675 "accel_error_inject_error", 00:05:47.675 "ioat_scan_accel_module", 00:05:47.675 "dsa_scan_accel_module", 00:05:47.675 "iaa_scan_accel_module", 00:05:47.675 "vfu_virtio_create_fs_endpoint", 00:05:47.675 "vfu_virtio_create_scsi_endpoint", 00:05:47.675 "vfu_virtio_scsi_remove_target", 00:05:47.675 "vfu_virtio_scsi_add_target", 00:05:47.675 "vfu_virtio_create_blk_endpoint", 00:05:47.675 "vfu_virtio_delete_endpoint", 00:05:47.675 "keyring_file_remove_key", 00:05:47.675 "keyring_file_add_key", 00:05:47.675 "keyring_linux_set_options", 00:05:47.675 "fsdev_aio_delete", 00:05:47.675 "fsdev_aio_create", 00:05:47.675 "iscsi_get_histogram", 00:05:47.675 "iscsi_enable_histogram", 00:05:47.675 "iscsi_set_options", 00:05:47.675 "iscsi_get_auth_groups", 00:05:47.675 "iscsi_auth_group_remove_secret", 00:05:47.675 "iscsi_auth_group_add_secret", 00:05:47.675 "iscsi_delete_auth_group", 00:05:47.675 "iscsi_create_auth_group", 00:05:47.675 "iscsi_set_discovery_auth", 00:05:47.675 "iscsi_get_options", 00:05:47.675 "iscsi_target_node_request_logout", 00:05:47.675 "iscsi_target_node_set_redirect", 00:05:47.675 "iscsi_target_node_set_auth", 00:05:47.675 "iscsi_target_node_add_lun", 00:05:47.675 "iscsi_get_stats", 00:05:47.675 "iscsi_get_connections", 00:05:47.675 "iscsi_portal_group_set_auth", 00:05:47.675 "iscsi_start_portal_group", 00:05:47.675 "iscsi_delete_portal_group", 00:05:47.675 "iscsi_create_portal_group", 00:05:47.675 "iscsi_get_portal_groups", 00:05:47.675 "iscsi_delete_target_node", 00:05:47.675 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.675 "iscsi_target_node_add_pg_ig_maps", 00:05:47.675 "iscsi_create_target_node", 00:05:47.675 "iscsi_get_target_nodes", 00:05:47.675 "iscsi_delete_initiator_group", 00:05:47.675 "iscsi_initiator_group_remove_initiators", 00:05:47.675 "iscsi_initiator_group_add_initiators", 00:05:47.675 "iscsi_create_initiator_group", 00:05:47.675 "iscsi_get_initiator_groups", 00:05:47.675 "nvmf_set_crdt", 00:05:47.675 "nvmf_set_config", 00:05:47.675 "nvmf_set_max_subsystems", 00:05:47.675 "nvmf_stop_mdns_prr", 00:05:47.675 "nvmf_publish_mdns_prr", 00:05:47.675 "nvmf_subsystem_get_listeners", 00:05:47.675 
"nvmf_subsystem_get_qpairs", 00:05:47.675 "nvmf_subsystem_get_controllers", 00:05:47.675 "nvmf_get_stats", 00:05:47.675 "nvmf_get_transports", 00:05:47.675 "nvmf_create_transport", 00:05:47.675 "nvmf_get_targets", 00:05:47.675 "nvmf_delete_target", 00:05:47.675 "nvmf_create_target", 00:05:47.675 "nvmf_subsystem_allow_any_host", 00:05:47.675 "nvmf_subsystem_set_keys", 00:05:47.675 "nvmf_subsystem_remove_host", 00:05:47.675 "nvmf_subsystem_add_host", 00:05:47.675 "nvmf_ns_remove_host", 00:05:47.675 "nvmf_ns_add_host", 00:05:47.675 "nvmf_subsystem_remove_ns", 00:05:47.675 "nvmf_subsystem_set_ns_ana_group", 00:05:47.675 "nvmf_subsystem_add_ns", 00:05:47.675 "nvmf_subsystem_listener_set_ana_state", 00:05:47.675 "nvmf_discovery_get_referrals", 00:05:47.675 "nvmf_discovery_remove_referral", 00:05:47.675 "nvmf_discovery_add_referral", 00:05:47.675 "nvmf_subsystem_remove_listener", 00:05:47.675 "nvmf_subsystem_add_listener", 00:05:47.675 "nvmf_delete_subsystem", 00:05:47.675 "nvmf_create_subsystem", 00:05:47.675 "nvmf_get_subsystems", 00:05:47.675 "env_dpdk_get_mem_stats", 00:05:47.675 "nbd_get_disks", 00:05:47.675 "nbd_stop_disk", 00:05:47.675 "nbd_start_disk", 00:05:47.675 "ublk_recover_disk", 00:05:47.675 "ublk_get_disks", 00:05:47.675 "ublk_stop_disk", 00:05:47.675 "ublk_start_disk", 00:05:47.675 "ublk_destroy_target", 00:05:47.675 "ublk_create_target", 00:05:47.675 "virtio_blk_create_transport", 00:05:47.675 "virtio_blk_get_transports", 00:05:47.675 "vhost_controller_set_coalescing", 00:05:47.675 "vhost_get_controllers", 00:05:47.675 "vhost_delete_controller", 00:05:47.675 "vhost_create_blk_controller", 00:05:47.675 "vhost_scsi_controller_remove_target", 00:05:47.675 "vhost_scsi_controller_add_target", 00:05:47.675 "vhost_start_scsi_controller", 00:05:47.675 "vhost_create_scsi_controller", 00:05:47.675 "thread_set_cpumask", 00:05:47.675 "scheduler_set_options", 00:05:47.675 "framework_get_governor", 00:05:47.675 "framework_get_scheduler", 00:05:47.675 "framework_set_scheduler", 00:05:47.675 "framework_get_reactors", 00:05:47.675 "thread_get_io_channels", 00:05:47.675 "thread_get_pollers", 00:05:47.675 "thread_get_stats", 00:05:47.675 "framework_monitor_context_switch", 00:05:47.675 "spdk_kill_instance", 00:05:47.675 "log_enable_timestamps", 00:05:47.675 "log_get_flags", 00:05:47.675 "log_clear_flag", 00:05:47.675 "log_set_flag", 00:05:47.675 "log_get_level", 00:05:47.675 "log_set_level", 00:05:47.675 "log_get_print_level", 00:05:47.675 "log_set_print_level", 00:05:47.675 "framework_enable_cpumask_locks", 00:05:47.675 "framework_disable_cpumask_locks", 00:05:47.675 "framework_wait_init", 00:05:47.675 "framework_start_init", 00:05:47.675 "scsi_get_devices", 00:05:47.675 "bdev_get_histogram", 00:05:47.675 "bdev_enable_histogram", 00:05:47.675 "bdev_set_qos_limit", 00:05:47.675 "bdev_set_qd_sampling_period", 00:05:47.675 "bdev_get_bdevs", 00:05:47.675 "bdev_reset_iostat", 00:05:47.675 "bdev_get_iostat", 00:05:47.675 "bdev_examine", 00:05:47.675 "bdev_wait_for_examine", 00:05:47.675 "bdev_set_options", 00:05:47.675 "accel_get_stats", 00:05:47.675 "accel_set_options", 00:05:47.675 "accel_set_driver", 00:05:47.675 "accel_crypto_key_destroy", 00:05:47.675 "accel_crypto_keys_get", 00:05:47.675 "accel_crypto_key_create", 00:05:47.675 "accel_assign_opc", 00:05:47.675 "accel_get_module_info", 00:05:47.675 "accel_get_opc_assignments", 00:05:47.675 "vmd_rescan", 00:05:47.675 "vmd_remove_device", 00:05:47.675 "vmd_enable", 00:05:47.675 "sock_get_default_impl", 00:05:47.675 "sock_set_default_impl", 
00:05:47.675 "sock_impl_set_options", 00:05:47.675 "sock_impl_get_options", 00:05:47.675 "iobuf_get_stats", 00:05:47.675 "iobuf_set_options", 00:05:47.675 "keyring_get_keys", 00:05:47.675 "vfu_tgt_set_base_path", 00:05:47.675 "framework_get_pci_devices", 00:05:47.675 "framework_get_config", 00:05:47.675 "framework_get_subsystems", 00:05:47.675 "fsdev_set_opts", 00:05:47.675 "fsdev_get_opts", 00:05:47.675 "trace_get_info", 00:05:47.675 "trace_get_tpoint_group_mask", 00:05:47.675 "trace_disable_tpoint_group", 00:05:47.675 "trace_enable_tpoint_group", 00:05:47.675 "trace_clear_tpoint_mask", 00:05:47.675 "trace_set_tpoint_mask", 00:05:47.675 "notify_get_notifications", 00:05:47.675 "notify_get_types", 00:05:47.675 "spdk_get_version", 00:05:47.675 "rpc_get_methods" 00:05:47.675 ] 00:05:47.675 10:23:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.675 10:23:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.675 10:23:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1818513 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1818513 ']' 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1818513 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818513 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818513' 00:05:47.675 killing process with pid 1818513 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1818513 00:05:47.675 10:23:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1818513 00:05:47.936 00:05:47.936 real 0m1.538s 00:05:47.936 user 0m2.794s 00:05:47.936 sys 0m0.474s 00:05:47.936 10:23:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.936 10:23:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.936 ************************************ 00:05:47.936 END TEST spdkcli_tcp 00:05:47.936 ************************************ 00:05:47.936 10:23:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.936 10:23:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.936 10:23:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.936 10:23:20 -- common/autotest_common.sh@10 -- # set +x 00:05:47.936 ************************************ 00:05:47.936 START TEST dpdk_mem_utility 00:05:47.936 ************************************ 00:05:47.936 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.936 * Looking for test storage... 
00:05:47.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:47.936 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.936 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.936 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.197 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.197 10:23:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:48.197 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.197 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.197 --rc genhtml_branch_coverage=1 00:05:48.197 --rc genhtml_function_coverage=1 00:05:48.197 --rc genhtml_legend=1 00:05:48.197 --rc geninfo_all_blocks=1 00:05:48.198 --rc geninfo_unexecuted_blocks=1 00:05:48.198 00:05:48.198 ' 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.198 --rc 
genhtml_branch_coverage=1 00:05:48.198 --rc genhtml_function_coverage=1 00:05:48.198 --rc genhtml_legend=1 00:05:48.198 --rc geninfo_all_blocks=1 00:05:48.198 --rc geninfo_unexecuted_blocks=1 00:05:48.198 00:05:48.198 ' 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.198 --rc genhtml_branch_coverage=1 00:05:48.198 --rc genhtml_function_coverage=1 00:05:48.198 --rc genhtml_legend=1 00:05:48.198 --rc geninfo_all_blocks=1 00:05:48.198 --rc geninfo_unexecuted_blocks=1 00:05:48.198 00:05:48.198 ' 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.198 --rc genhtml_branch_coverage=1 00:05:48.198 --rc genhtml_function_coverage=1 00:05:48.198 --rc genhtml_legend=1 00:05:48.198 --rc geninfo_all_blocks=1 00:05:48.198 --rc geninfo_unexecuted_blocks=1 00:05:48.198 00:05:48.198 ' 00:05:48.198 10:23:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.198 10:23:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1818863 00:05:48.198 10:23:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1818863 00:05:48.198 10:23:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1818863 ']' 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.198 10:23:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.198 [2024-11-20 10:23:20.451261] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:48.198 [2024-11-20 10:23:20.451315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818863 ] 00:05:48.198 [2024-11-20 10:23:20.535817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.458 [2024-11-20 10:23:20.570397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.030 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.030 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:49.030 10:23:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.030 10:23:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.030 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.030 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.030 { 00:05:49.030 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.030 } 00:05:49.030 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.030 10:23:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.030 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:49.030 1 heaps totaling size 810.000000 MiB 00:05:49.030 size: 810.000000 MiB heap id: 0 00:05:49.030 end heaps---------- 00:05:49.030 9 mempools totaling size 595.772034 MiB 00:05:49.030 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.030 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.030 size: 92.545471 MiB name: bdev_io_1818863 00:05:49.030 size: 50.003479 MiB name: msgpool_1818863 00:05:49.030 size: 36.509338 MiB name: fsdev_io_1818863 00:05:49.030 size: 21.763794 MiB name: PDU_Pool 00:05:49.030 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.030 size: 4.133484 MiB name: evtpool_1818863 00:05:49.030 size: 0.026123 MiB name: Session_Pool 00:05:49.030 end mempools------- 00:05:49.030 6 memzones totaling size 4.142822 MiB 00:05:49.030 size: 1.000366 MiB name: RG_ring_0_1818863 00:05:49.030 size: 1.000366 MiB name: RG_ring_1_1818863 00:05:49.030 size: 1.000366 MiB name: RG_ring_4_1818863 00:05:49.030 size: 1.000366 MiB name: RG_ring_5_1818863 00:05:49.030 size: 0.125366 MiB name: RG_ring_2_1818863 00:05:49.030 size: 0.015991 MiB name: RG_ring_3_1818863 00:05:49.030 end memzones------- 00:05:49.030 10:23:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.030 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:49.030 list of free elements. 
size: 10.862488 MiB 00:05:49.030 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:49.030 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:49.030 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:49.030 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:49.030 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:49.030 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:49.030 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:49.030 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:49.030 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:49.030 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:49.030 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:49.030 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:49.030 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:49.030 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:49.030 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:49.030 list of standard malloc elements. size: 199.218628 MiB 00:05:49.030 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:49.030 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:49.030 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:49.030 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:49.030 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:49.030 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:49.030 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:49.030 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:49.030 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:49.030 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:49.030 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:49.030 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:49.030 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:49.031 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:49.031 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:49.031 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:49.031 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:49.031 list of memzone associated elements. size: 599.918884 MiB 00:05:49.031 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:49.031 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:49.031 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:49.031 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:49.031 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:49.031 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1818863_0 00:05:49.031 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:49.031 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1818863_0 00:05:49.031 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:49.031 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1818863_0 00:05:49.031 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:49.031 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:49.031 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:49.031 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:49.031 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:49.031 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1818863_0 00:05:49.031 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:49.031 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1818863 00:05:49.031 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:49.031 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1818863 00:05:49.031 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:49.031 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:49.031 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:49.031 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:49.031 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:49.031 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:49.031 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:49.031 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:49.031 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:49.031 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1818863 00:05:49.031 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:49.031 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1818863 00:05:49.031 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:49.031 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1818863 00:05:49.031 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:49.031 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1818863 00:05:49.031 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:49.031 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1818863 00:05:49.031 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:49.031 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1818863 00:05:49.031 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:49.031 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:49.031 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:49.031 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:49.031 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:49.031 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:49.031 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:49.031 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1818863 00:05:49.031 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:49.031 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1818863 00:05:49.031 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:49.031 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:49.031 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:49.031 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:49.031 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:49.031 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1818863 00:05:49.031 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:49.031 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:49.031 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:49.031 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1818863 00:05:49.031 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:49.031 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1818863 00:05:49.031 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:49.031 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1818863 00:05:49.031 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:49.031 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:49.031 10:23:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:49.031 10:23:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1818863 00:05:49.031 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1818863 ']' 00:05:49.031 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1818863 00:05:49.031 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:49.031 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.031 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818863 00:05:49.291 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.291 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.291 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818863' 00:05:49.291 killing process with pid 1818863 00:05:49.291 10:23:21 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1818863 00:05:49.291 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1818863 00:05:49.291 00:05:49.291 real 0m1.417s 00:05:49.291 user 0m1.500s 00:05:49.292 sys 0m0.416s 00:05:49.292 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.292 10:23:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.292 ************************************ 00:05:49.292 END TEST dpdk_mem_utility 00:05:49.292 ************************************ 00:05:49.292 10:23:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.292 10:23:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.292 10:23:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.292 10:23:21 -- common/autotest_common.sh@10 -- # set +x 00:05:49.552 ************************************ 00:05:49.552 START TEST event 00:05:49.552 ************************************ 00:05:49.552 10:23:21 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.552 * Looking for test storage... 00:05:49.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.552 10:23:21 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.552 10:23:21 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.552 10:23:21 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.552 10:23:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.552 10:23:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.553 10:23:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.553 10:23:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.553 10:23:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.553 10:23:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.553 10:23:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.553 10:23:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.553 10:23:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.553 10:23:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.553 10:23:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.553 10:23:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.553 10:23:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:49.553 10:23:21 event -- scripts/common.sh@345 -- # : 1 00:05:49.553 10:23:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.553 10:23:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
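The dpdk_mem_utility pass above reduces to three steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then post-processes that dump. A sketch against a running spdk_tgt on the default RPC socket:

# 1. have the target dump its heap/mempool/memzone state
scripts/rpc.py env_dpdk_get_mem_stats
# 2. summarize heaps, mempools and memzones from the dump
scripts/dpdk_mem_info.py
# 3. detail the busy/free element lists of a single heap (heap id 0 here)
scripts/dpdk_mem_info.py -m 0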
ver1_l : ver2_l) )) 00:05:49.553 10:23:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:49.553 10:23:21 event -- scripts/common.sh@353 -- # local d=1 00:05:49.553 10:23:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.553 10:23:21 event -- scripts/common.sh@355 -- # echo 1 00:05:49.553 10:23:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.553 10:23:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:49.553 10:23:21 event -- scripts/common.sh@353 -- # local d=2 00:05:49.553 10:23:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.553 10:23:21 event -- scripts/common.sh@355 -- # echo 2 00:05:49.553 10:23:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.553 10:23:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.553 10:23:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.553 10:23:21 event -- scripts/common.sh@368 -- # return 0 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.553 --rc genhtml_branch_coverage=1 00:05:49.553 --rc genhtml_function_coverage=1 00:05:49.553 --rc genhtml_legend=1 00:05:49.553 --rc geninfo_all_blocks=1 00:05:49.553 --rc geninfo_unexecuted_blocks=1 00:05:49.553 00:05:49.553 ' 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.553 --rc genhtml_branch_coverage=1 00:05:49.553 --rc genhtml_function_coverage=1 00:05:49.553 --rc genhtml_legend=1 00:05:49.553 --rc geninfo_all_blocks=1 00:05:49.553 --rc geninfo_unexecuted_blocks=1 00:05:49.553 00:05:49.553 ' 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.553 --rc genhtml_branch_coverage=1 00:05:49.553 --rc genhtml_function_coverage=1 00:05:49.553 --rc genhtml_legend=1 00:05:49.553 --rc geninfo_all_blocks=1 00:05:49.553 --rc geninfo_unexecuted_blocks=1 00:05:49.553 00:05:49.553 ' 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.553 --rc genhtml_branch_coverage=1 00:05:49.553 --rc genhtml_function_coverage=1 00:05:49.553 --rc genhtml_legend=1 00:05:49.553 --rc geninfo_all_blocks=1 00:05:49.553 --rc geninfo_unexecuted_blocks=1 00:05:49.553 00:05:49.553 ' 00:05:49.553 10:23:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.553 10:23:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.553 10:23:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:49.553 10:23:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.553 10:23:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 ************************************ 00:05:49.814 START TEST event_perf 00:05:49.814 ************************************ 00:05:49.814 10:23:21 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:49.814 Running I/O for 1 seconds...[2024-11-20 10:23:21.954189] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:49.814 [2024-11-20 10:23:21.954305] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819207 ] 00:05:49.814 [2024-11-20 10:23:22.044498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.814 [2024-11-20 10:23:22.089539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.814 [2024-11-20 10:23:22.089694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.814 [2024-11-20 10:23:22.089847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.814 [2024-11-20 10:23:22.089848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.755 Running I/O for 1 seconds... 00:05:50.755 lcore 0: 176795 00:05:50.755 lcore 1: 176797 00:05:50.755 lcore 2: 176798 00:05:50.755 lcore 3: 176799 00:05:50.755 done. 00:05:50.755 00:05:50.755 real 0m1.186s 00:05:50.755 user 0m4.102s 00:05:50.755 sys 0m0.080s 00:05:50.755 10:23:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.755 10:23:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.755 ************************************ 00:05:50.755 END TEST event_perf 00:05:50.755 ************************************ 00:05:51.013 10:23:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.013 10:23:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:51.013 10:23:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.013 10:23:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.013 ************************************ 00:05:51.013 START TEST event_reactor 00:05:51.013 ************************************ 00:05:51.013 10:23:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.013 [2024-11-20 10:23:23.214602] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:51.013 [2024-11-20 10:23:23.214706] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819548 ] 00:05:51.013 [2024-11-20 10:23:23.302070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.013 [2024-11-20 10:23:23.337246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.396 test_start 00:05:52.396 oneshot 00:05:52.396 tick 100 00:05:52.396 tick 100 00:05:52.396 tick 250 00:05:52.396 tick 100 00:05:52.396 tick 100 00:05:52.396 tick 100 00:05:52.396 tick 250 00:05:52.396 tick 500 00:05:52.396 tick 100 00:05:52.396 tick 100 00:05:52.396 tick 250 00:05:52.396 tick 100 00:05:52.396 tick 100 00:05:52.396 test_end 00:05:52.396 00:05:52.396 real 0m1.172s 00:05:52.396 user 0m1.084s 00:05:52.396 sys 0m0.084s 00:05:52.396 10:23:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.396 10:23:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.396 ************************************ 00:05:52.396 END TEST event_reactor 00:05:52.396 ************************************ 00:05:52.396 10:23:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.396 10:23:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:52.396 10:23:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.396 10:23:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.396 ************************************ 00:05:52.396 START TEST event_reactor_perf 00:05:52.396 ************************************ 00:05:52.396 10:23:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.396 [2024-11-20 10:23:24.463901] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:05:52.396 [2024-11-20 10:23:24.463998] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819898 ] 00:05:52.396 [2024-11-20 10:23:24.552347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.396 [2024-11-20 10:23:24.590316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.337 test_start 00:05:53.337 test_end 00:05:53.337 Performance: 532043 events per second 00:05:53.337 00:05:53.337 real 0m1.174s 00:05:53.337 user 0m1.089s 00:05:53.337 sys 0m0.082s 00:05:53.337 10:23:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.337 10:23:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.337 ************************************ 00:05:53.337 END TEST event_reactor_perf 00:05:53.337 ************************************ 00:05:53.337 10:23:25 event -- event/event.sh@49 -- # uname -s 00:05:53.337 10:23:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.337 10:23:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.337 10:23:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.337 10:23:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.337 10:23:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 ************************************ 00:05:53.338 START TEST event_scheduler 00:05:53.338 ************************************ 00:05:53.338 10:23:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.597 * Looking for test storage... 
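The three event micro-benchmarks above share one invocation pattern: -m sets the reactor core mask and -t the run time in seconds, and each prints its per-lcore tallies before exiting. A sketch of running them straight from the build tree, with the same arguments the harness used:

# four reactors (mask 0xF) for one second; prints events handled per lcore
test/event/event_perf/event_perf -m 0xF -t 1
# single-reactor tick test and raw event-throughput test, one second each
test/event/reactor/reactor -t 1
test/event/reactor_perf/reactor_perf -t 1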
00:05:53.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.597 10:23:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.597 --rc genhtml_branch_coverage=1 00:05:53.597 --rc genhtml_function_coverage=1 00:05:53.597 --rc genhtml_legend=1 00:05:53.597 --rc geninfo_all_blocks=1 00:05:53.597 --rc geninfo_unexecuted_blocks=1 00:05:53.597 00:05:53.597 ' 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.597 --rc genhtml_branch_coverage=1 00:05:53.597 --rc genhtml_function_coverage=1 00:05:53.597 --rc genhtml_legend=1 00:05:53.597 --rc geninfo_all_blocks=1 00:05:53.597 --rc geninfo_unexecuted_blocks=1 00:05:53.597 00:05:53.597 ' 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.597 --rc genhtml_branch_coverage=1 00:05:53.597 --rc genhtml_function_coverage=1 00:05:53.597 --rc genhtml_legend=1 00:05:53.597 --rc geninfo_all_blocks=1 00:05:53.597 --rc geninfo_unexecuted_blocks=1 00:05:53.597 00:05:53.597 ' 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.597 --rc genhtml_branch_coverage=1 00:05:53.597 --rc genhtml_function_coverage=1 00:05:53.597 --rc genhtml_legend=1 00:05:53.597 --rc geninfo_all_blocks=1 00:05:53.597 --rc geninfo_unexecuted_blocks=1 00:05:53.597 00:05:53.597 ' 00:05:53.597 10:23:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.597 10:23:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1820254 00:05:53.597 10:23:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.597 10:23:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1820254 00:05:53.597 10:23:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1820254 ']' 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.597 10:23:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.597 [2024-11-20 10:23:25.951541] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:05:53.597 [2024-11-20 10:23:25.951595] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820254 ] 00:05:53.858 [2024-11-20 10:23:26.041244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.858 [2024-11-20 10:23:26.089817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.858 [2024-11-20 10:23:26.089976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.858 [2024-11-20 10:23:26.090132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.858 [2024-11-20 10:23:26.090133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:54.433 10:23:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.433 [2024-11-20 10:23:26.760564] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:54.433 [2024-11-20 10:23:26.760584] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.433 [2024-11-20 10:23:26.760594] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.433 [2024-11-20 10:23:26.760600] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.433 [2024-11-20 10:23:26.760606] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.433 10:23:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.433 10:23:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 [2024-11-20 10:23:26.822709] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
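Note the two-phase bring-up the scheduler test relies on: because the app was launched with --wait-for-rpc, subsystem initialization is deferred, the scheduler can be switched to dynamic first, and only then is the framework started. The equivalent RPC sequence, sketched against the default socket:

# app started with --wait-for-rpc, so this must run before init
scripts/rpc.py framework_set_scheduler dynamic
# now initialize subsystems; the dynamic scheduler is already in place
scripts/rpc.py framework_start_init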
00:05:54.692 10:23:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.692 10:23:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.692 10:23:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 ************************************ 00:05:54.692 START TEST scheduler_create_thread 00:05:54.692 ************************************ 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 2 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 3 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 4 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 5 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 6 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 7 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 8 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.692 9 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.692 10:23:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.262 10 00:05:55.262 10:23:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.262 10:23:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.262 10:23:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.262 10:23:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.642 10:23:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.642 10:23:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.642 10:23:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.642 10:23:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.642 10:23:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.212 10:23:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.212 10:23:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.212 10:23:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.212 10:23:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.183 10:23:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.183 10:23:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.183 10:23:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.183 10:23:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.183 10:23:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.755 10:23:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.755 00:05:58.755 real 0m4.225s 00:05:58.755 user 0m0.028s 00:05:58.755 sys 0m0.004s 00:05:58.755 10:23:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.755 10:23:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.755 ************************************ 00:05:58.755 END TEST scheduler_create_thread 00:05:58.755 ************************************ 00:05:58.755 10:23:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.755 10:23:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1820254 00:05:58.755 10:23:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1820254 ']' 00:05:58.755 10:23:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1820254 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1820254 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1820254' 00:05:59.015 killing process with pid 1820254 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1820254 00:05:59.015 10:23:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1820254 00:05:59.015 [2024-11-20 10:23:31.368472] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
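The scheduler_create_thread steps above drive RPCs supplied by the test's own scheduler_plugin rather than stock rpc.py commands: -n names the thread, -m pins its cpumask, -a sets its target active percentage, and the thread ids returned by the create calls (11 and 12 in this run) feed the later set_active/delete calls. A sketch, assuming the scheduler_plugin module is importable (the test arranges this):

# create a pinned thread targeting ~100% activity on core 0
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# retune thread 11 to 50% activity, then delete thread 12
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12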
00:05:59.276 00:05:59.276 real 0m5.824s 00:05:59.276 user 0m12.887s 00:05:59.276 sys 0m0.406s 00:05:59.276 10:23:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.276 10:23:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.276 ************************************ 00:05:59.276 END TEST event_scheduler 00:05:59.276 ************************************ 00:05:59.276 10:23:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.276 10:23:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.276 10:23:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.276 10:23:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.276 10:23:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.276 ************************************ 00:05:59.276 START TEST app_repeat 00:05:59.276 ************************************ 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1821370 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1821370' 00:05:59.276 Process app_repeat pid: 1821370 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.276 spdk_app_start Round 0 00:05:59.276 10:23:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1821370 /var/tmp/spdk-nbd.sock 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1821370 ']' 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.276 10:23:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.276 [2024-11-20 10:23:31.643667] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
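The app_repeat test starting here runs the app under test with a two-core mask and a four-second timer per round; each of the three rounds below builds two malloc bdevs, exports them over NBD, verifies I/O, and kills the instance. A condensed sketch of the launch-and-wait step; the repo path is shortened, and the RPC polling loop is only a stand-in for the waitforlisten helper the trace actually uses:

    # Sketch: start app_repeat and wait for its RPC socket to accept commands.
    SOCK=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$SOCK" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

    # waitforlisten polls the UNIX socket; a crude equivalent retries a no-op RPC.
    until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done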
00:05:59.276 [2024-11-20 10:23:31.643734] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821370 ] 00:05:59.537 [2024-11-20 10:23:31.728599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.537 [2024-11-20 10:23:31.760534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.537 [2024-11-20 10:23:31.760622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.537 10:23:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.537 10:23:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.537 10:23:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.797 Malloc0 00:05:59.798 10:23:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.058 Malloc1 00:06:00.058 10:23:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.058 10:23:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.058 /dev/nbd0 00:06:00.319 10:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.319 10:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.319 1+0 records in 00:06:00.319 1+0 records out 00:06:00.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295731 s, 13.9 MB/s 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.319 10:23:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.320 10:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.320 10:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.320 10:23:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.320 /dev/nbd1 00:06:00.320 10:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.320 10:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.320 10:23:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.581 1+0 records in 00:06:00.581 1+0 records out 00:06:00.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274931 s, 14.9 MB/s 00:06:00.581 10:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.581 10:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.581 10:23:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.581 10:23:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.581 10:23:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.581 
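Each round's setup, traced above, creates two 64 MiB malloc bdevs (bdev_malloc_create 64 4096) and exports them as /dev/nbd0 and /dev/nbd1 with nbd_start_disk; the grep/dd/stat triplets are the waitfornbd helper proving each device node exists and is readable. A trimmed sketch of that readiness check, with the scratch path abbreviated; the retry sleep is an assumption, since the trace only shows the loop bounds:

    # Sketch of waitfornbd: wait for the device node, then read one 4 KiB block.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a non-empty O_DIRECT read means the device is live
    }

    waitfornbd nbd0 && waitfornbd nbd1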
10:23:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.581 { 00:06:00.581 "nbd_device": "/dev/nbd0", 00:06:00.581 "bdev_name": "Malloc0" 00:06:00.581 }, 00:06:00.581 { 00:06:00.581 "nbd_device": "/dev/nbd1", 00:06:00.581 "bdev_name": "Malloc1" 00:06:00.581 } 00:06:00.581 ]' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.581 { 00:06:00.581 "nbd_device": "/dev/nbd0", 00:06:00.581 "bdev_name": "Malloc0" 00:06:00.581 }, 00:06:00.581 { 00:06:00.581 "nbd_device": "/dev/nbd1", 00:06:00.581 "bdev_name": "Malloc1" 00:06:00.581 } 00:06:00.581 ]' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.581 /dev/nbd1' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.581 /dev/nbd1' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.581 10:23:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.842 256+0 records in 00:06:00.842 256+0 records out 00:06:00.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127329 s, 82.4 MB/s 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.842 256+0 records in 00:06:00.842 256+0 records out 00:06:00.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119251 s, 87.9 MB/s 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.842 256+0 records in 00:06:00.842 256+0 records out 00:06:00.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126255 s, 83.1 MB/s 00:06:00.842 10:23:32 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.842 10:23:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.843 10:23:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.843 10:23:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.843 10:23:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.843 10:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.104 10:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.365 10:23:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.365 10:23:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.627 10:23:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.627 [2024-11-20 10:23:33.926987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.627 [2024-11-20 10:23:33.956515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.627 [2024-11-20 10:23:33.956516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.627 [2024-11-20 10:23:33.985481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.627 [2024-11-20 10:23:33.985512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.929 10:23:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.929 10:23:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.929 spdk_app_start Round 1 00:06:04.929 10:23:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1821370 /var/tmp/spdk-nbd.sock 00:06:04.929 10:23:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1821370 ']' 00:06:04.929 10:23:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.929 10:23:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.929 10:23:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
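The 256+0 records in/out blocks above are nbd_dd_data_verify: a write pass pushes 1 MiB of fresh random data through each NBD device with O_DIRECT, and a verify pass byte-compares each device against the same source file, so a mismatch on either device fails the round. A condensed sketch with paths abbreviated:

    # Sketch of nbd_dd_data_verify's write and verify passes over both devices.
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256    # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write pass
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # verify pass
    done
    rm "$tmp_file"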
00:06:04.929 10:23:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.929 10:23:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.929 10:23:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.929 10:23:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.929 10:23:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.929 Malloc0 00:06:04.929 10:23:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.191 Malloc1 00:06:05.191 10:23:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.191 10:23:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.452 /dev/nbd0 00:06:05.452 10:23:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.452 10:23:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.452 1+0 records in 00:06:05.452 1+0 records out 00:06:05.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381698 s, 10.7 MB/s 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.452 10:23:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.453 10:23:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.453 10:23:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.453 10:23:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.453 10:23:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.453 10:23:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.453 10:23:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.714 /dev/nbd1 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.714 1+0 records in 00:06:05.714 1+0 records out 00:06:05.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273531 s, 15.0 MB/s 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.714 10:23:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.714 10:23:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.714 10:23:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.714 { 00:06:05.714 "nbd_device": "/dev/nbd0", 00:06:05.714 "bdev_name": "Malloc0" 00:06:05.714 }, 00:06:05.714 { 00:06:05.714 "nbd_device": "/dev/nbd1", 00:06:05.714 "bdev_name": "Malloc1" 00:06:05.714 } 00:06:05.714 ]' 00:06:05.714 10:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.714 { 00:06:05.714 "nbd_device": "/dev/nbd0", 00:06:05.714 "bdev_name": "Malloc0" 00:06:05.714 }, 00:06:05.714 { 00:06:05.714 "nbd_device": "/dev/nbd1", 00:06:05.714 "bdev_name": "Malloc1" 00:06:05.714 } 00:06:05.714 ]' 00:06:05.714 10:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.975 /dev/nbd1' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.975 /dev/nbd1' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.975 256+0 records in 00:06:05.975 256+0 records out 00:06:05.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117781 s, 89.0 MB/s 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.975 256+0 records in 00:06:05.975 256+0 records out 00:06:05.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122667 s, 85.5 MB/s 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.975 256+0 records in 00:06:05.975 256+0 records out 00:06:05.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129221 s, 81.1 MB/s 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.975 10:23:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.236 10:23:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.497 10:23:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.497 10:23:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.758 10:23:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.758 [2024-11-20 10:23:39.102947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.022 [2024-11-20 10:23:39.132062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.022 [2024-11-20 10:23:39.132063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.022 [2024-11-20 10:23:39.161715] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.022 [2024-11-20 10:23:39.161746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.327 10:23:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.327 10:23:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.327 spdk_app_start Round 2 00:06:10.327 10:23:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1821370 /var/tmp/spdk-nbd.sock 00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1821370 ']' 00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
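The empty '[]' JSON traced just above is nbd_get_count after teardown: it lists the target's exported NBD devices over RPC and counts them, and the round only completes once that count is back to zero. A sketch of the count; note that grep -c exits non-zero on a zero count, hence the true fallback the trace also shows:

    # Sketch of nbd_get_count: list exported NBD devices and count them.
    SOCK=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(./scripts/rpc.py -s "$SOCK" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # all devices detached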
00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.327 10:23:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.328 10:23:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.328 10:23:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.328 Malloc0 00:06:10.328 10:23:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.328 Malloc1 00:06:10.328 10:23:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.328 10:23:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.588 /dev/nbd0 00:06:10.588 10:23:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.588 10:23:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.588 10:23:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.588 10:23:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:10.589 1+0 records in 00:06:10.589 1+0 records out 00:06:10.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291564 s, 14.0 MB/s 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.589 10:23:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.589 10:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.589 10:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.589 10:23:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.850 /dev/nbd1 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.850 1+0 records in 00:06:10.850 1+0 records out 00:06:10.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266831 s, 15.4 MB/s 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.850 10:23:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.850 10:23:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:11.111 { 00:06:11.111 "nbd_device": "/dev/nbd0", 00:06:11.111 "bdev_name": "Malloc0" 00:06:11.111 }, 00:06:11.111 { 00:06:11.111 "nbd_device": "/dev/nbd1", 00:06:11.111 "bdev_name": "Malloc1" 00:06:11.111 } 00:06:11.111 ]' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.111 { 00:06:11.111 "nbd_device": "/dev/nbd0", 00:06:11.111 "bdev_name": "Malloc0" 00:06:11.111 }, 00:06:11.111 { 00:06:11.111 "nbd_device": "/dev/nbd1", 00:06:11.111 "bdev_name": "Malloc1" 00:06:11.111 } 00:06:11.111 ]' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.111 /dev/nbd1' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.111 /dev/nbd1' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.111 256+0 records in 00:06:11.111 256+0 records out 00:06:11.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127402 s, 82.3 MB/s 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.111 256+0 records in 00:06:11.111 256+0 records out 00:06:11.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119855 s, 87.5 MB/s 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.111 10:23:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.111 256+0 records in 00:06:11.112 256+0 records out 00:06:11.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012709 s, 82.5 MB/s 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.112 10:23:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.373 10:23:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.633 10:23:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.633 10:23:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.894 10:23:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.154 [2024-11-20 10:23:44.269534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.154 [2024-11-20 10:23:44.299206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.154 [2024-11-20 10:23:44.299206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.154 [2024-11-20 10:23:44.328252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.154 [2024-11-20 10:23:44.328283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.457 10:23:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1821370 /var/tmp/spdk-nbd.sock 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1821370 ']' 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
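Rounds end the same way every time, as the nbd_stop_disk / waitfornbd_exit traces above show: stop each NBD device, wait for it to disappear from /proc/partitions, confirm nbd_get_count is zero, then ask the app to exit over RPC so the next round starts clean. A sketch of that teardown:

    # Sketch of the per-round teardown traced above.
    SOCK=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        ./scripts/rpc.py -s "$SOCK" nbd_stop_disk "$dev"
    done
    ./scripts/rpc.py -s "$SOCK" spdk_kill_instance SIGTERM
    sleep 3   # the test pauses before starting the next round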
00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.457 10:23:47 event.app_repeat -- event/event.sh@39 -- # killprocess 1821370 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1821370 ']' 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1821370 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821370 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821370' 00:06:15.457 killing process with pid 1821370 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1821370 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1821370 00:06:15.457 spdk_app_start is called in Round 0. 00:06:15.457 Shutdown signal received, stop current app iteration 00:06:15.457 Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 reinitialization... 00:06:15.457 spdk_app_start is called in Round 1. 00:06:15.457 Shutdown signal received, stop current app iteration 00:06:15.457 Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 reinitialization... 00:06:15.457 spdk_app_start is called in Round 2. 00:06:15.457 Shutdown signal received, stop current app iteration 00:06:15.457 Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 reinitialization... 00:06:15.457 spdk_app_start is called in Round 3. 
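killprocess, traced above for pid 1821370, guards against killing the wrong thing: it verifies the pid is still alive with kill -0, reads the command name via ps, refuses to signal a bare sudo, and only then kills and waits. A trimmed sketch; the real helper also handles the sudo-wrapper and non-Linux cases this version omits, and wait only works because the test started the pid as its own child:

    # Trimmed sketch of killprocess: verify the pid before signalling it.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1        # never signal bare sudo here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }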
00:06:15.457 Shutdown signal received, stop current app iteration 00:06:15.457 10:23:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.457 10:23:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.457 00:06:15.457 real 0m15.913s 00:06:15.457 user 0m35.032s 00:06:15.457 sys 0m2.277s 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.457 10:23:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.457 ************************************ 00:06:15.457 END TEST app_repeat 00:06:15.457 ************************************ 00:06:15.457 10:23:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.457 10:23:47 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.457 10:23:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.457 10:23:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.457 10:23:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.457 ************************************ 00:06:15.457 START TEST cpu_locks 00:06:15.457 ************************************ 00:06:15.457 10:23:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.457 * Looking for test storage... 00:06:15.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.457 10:23:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.457 10:23:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.457 10:23:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.457 10:23:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.457 10:23:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.458 10:23:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.458 --rc genhtml_branch_coverage=1 00:06:15.458 --rc genhtml_function_coverage=1 00:06:15.458 --rc genhtml_legend=1 00:06:15.458 --rc geninfo_all_blocks=1 00:06:15.458 --rc geninfo_unexecuted_blocks=1 00:06:15.458 00:06:15.458 ' 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.458 --rc genhtml_branch_coverage=1 00:06:15.458 --rc genhtml_function_coverage=1 00:06:15.458 --rc genhtml_legend=1 00:06:15.458 --rc geninfo_all_blocks=1 00:06:15.458 --rc geninfo_unexecuted_blocks=1 00:06:15.458 00:06:15.458 ' 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.458 --rc genhtml_branch_coverage=1 00:06:15.458 --rc genhtml_function_coverage=1 00:06:15.458 --rc genhtml_legend=1 00:06:15.458 --rc geninfo_all_blocks=1 00:06:15.458 --rc geninfo_unexecuted_blocks=1 00:06:15.458 00:06:15.458 ' 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.458 --rc genhtml_branch_coverage=1 00:06:15.458 --rc genhtml_function_coverage=1 00:06:15.458 --rc genhtml_legend=1 00:06:15.458 --rc geninfo_all_blocks=1 00:06:15.458 --rc geninfo_unexecuted_blocks=1 00:06:15.458 00:06:15.458 ' 00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.719 ************************************ 
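[annotation] The lt 1.15 2 trace above is the harness checking whether the installed lcov predates version 2: cmp_versions splits each version string on '.', '-' and ':' into arrays and compares them component by component. A condensed sketch of that comparison, reconstructed from the scripts/common.sh trace rather than copied from it:

    # Return 0 when $1 < $2, comparing dot/dash/colon-separated numeric parts.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not 'less than'
    }
    lt 1.15 2 && echo "lcov is older than 2"   # matches the trace: 1 < 2 in the first component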
00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:15.458 10:23:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.458 10:23:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.719 ************************************
00:06:15.719 START TEST default_locks
00:06:15.719 ************************************
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1824887
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1824887
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1824887 ']'
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.719 10:23:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.719 [2024-11-20 10:23:47.906304] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:15.719 [2024-11-20 10:23:47.906371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824887 ]
00:06:15.719 [2024-11-20 10:23:47.993963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.719 [2024-11-20 10:23:48.028802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:16.661 lslocks: write error
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1824887 ']'
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1824887'
00:06:16.661 killing process with pid 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1824887
00:06:16.661 10:23:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1824887
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1824887
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1824887
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1824887
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1824887 ']'
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.926 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1824887) - No such process
00:06:16.927 ERROR: process (pid: 1824887) is no longer running
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:16.927
00:06:16.927 real 0m1.342s
00:06:16.927 user 0m1.452s
00:06:16.927 sys 0m0.453s
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.927 10:23:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.927 ************************************
00:06:16.927 END TEST default_locks
00:06:16.927 ************************************
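[annotation] default_locks above asserts that a target started with -m 0x1 holds a POSIX lock on a /var/tmp/spdk_cpu_lock_* file and that killing the target releases it; the "lslocks: write error" lines are typically just lslocks hitting a closed pipe once grep -q exits on its first match. A minimal sketch of the core check, following the locks_exist trace (spdk_tgt is shortened here; the log uses the full build path):

    # Verify that a running spdk_tgt pid holds a CPU core lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # lock paths look like /var/tmp/spdk_cpu_lock_000
    }
    spdk_tgt -m 0x1 & pid=$!
    # ... wait for the RPC socket to appear, then:
    locks_exist "$pid" && echo "core 0 is locked by $pid"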
00:06:16.927 10:23:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:16.927 10:23:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.927 10:23:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.927 10:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.927 ************************************
00:06:16.927 START TEST default_locks_via_rpc
00:06:16.927 ************************************
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1825103
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1825103
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1825103 ']'
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.927 10:23:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.189 [2024-11-20 10:23:49.316244] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:17.189 [2024-11-20 10:23:49.316298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825103 ]
00:06:17.189 [2024-11-20 10:23:49.402238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.189 [2024-11-20 10:23:49.435720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1825103
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:17.760 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1825103
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1825103
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1825103 ']'
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1825103
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825103
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:18.333 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825103'
killing process with pid 1825103
10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1825103
10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1825103
00:06:18.594
00:06:18.594 real 0m1.579s
00:06:18.594 user 0m1.701s
00:06:18.594 sys 0m0.547s
00:06:18.594 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:18.595 10:23:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:18.595 ************************************
00:06:18.595 END TEST default_locks_via_rpc
00:06:18.595 ************************************
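[annotation] default_locks_via_rpc above toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks reclaims them, after which the lslocks check passes again. A sketch of that sequence driven through rpc.py directly (the rpc.py path is taken from the trace; $tgt_pid is a hypothetical placeholder for the target's pid):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # With a target already listening on the default socket /var/tmp/spdk.sock:
    "$rpc" framework_disable_cpumask_locks    # drop the /var/tmp/spdk_cpu_lock_* files
    "$rpc" framework_enable_cpumask_locks     # take them again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks held again"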
00:06:18.595 10:23:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:18.595 10:23:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:18.595 10:23:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:18.595 10:23:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:18.595 ************************************
00:06:18.595 START TEST non_locking_app_on_locked_coremask
00:06:18.595 ************************************
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1825449
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1825449 /var/tmp/spdk.sock
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1825449 ']'
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:18.595 10:23:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.856 [2024-11-20 10:23:50.972294] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:18.856 [2024-11-20 10:23:50.972350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825449 ]
00:06:18.856 [2024-11-20 10:23:51.059207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.856 [2024-11-20 10:23:51.092080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1825709
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1825709 /var/tmp/spdk2.sock
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1825709 ']'
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:19.427 10:23:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:19.689 [2024-11-20 10:23:51.820803] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:19.689 [2024-11-20 10:23:51.820858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825709 ]
00:06:19.689 [2024-11-20 10:23:51.906086] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:19.689 [2024-11-20 10:23:51.906108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.689 [2024-11-20 10:23:51.964319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.260 10:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:20.260 10:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:20.260 10:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1825449
00:06:20.260 10:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:20.260 10:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1825449
00:06:20.829 lslocks: write error
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1825449
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1825449 ']'
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1825449
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825449
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:20.829 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825449'
killing process with pid 1825449
10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1825449
10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1825449
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1825709
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1825709 ']'
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1825709
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825709
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:21.399 10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825709'
killing process with pid 1825709
10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1825709
10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1825709
00:06:21.399
00:06:21.399 real 0m2.834s
00:06:21.399 user 0m3.190s
00:06:21.399 sys 0m0.844s
10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
10:23:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:21.399 ************************************
00:06:21.399 END TEST non_locking_app_on_locked_coremask
00:06:21.399 ************************************
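[annotation] non_locking_app_on_locked_coremask above runs two targets at once: the first claims core 0, and the second still starts on the same core because it is launched with --disable-cpumask-locks and a second RPC socket. A sketch of that arrangement, with the flags taken from the trace (binary path shortened; the log uses the full build path):

    spdk_tgt -m 0x1 & pid1=$!                                                  # claims /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!   # same core, takes no lock
    # After both sockets are up, only the first instance should hold a lock:
    lslocks -p "$pid1" | grep -c spdk_cpu_lock           # expect 1
    lslocks -p "$pid2" | grep -c spdk_cpu_lock || true   # expect 0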
00:06:21.659 10:23:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:21.659 10:23:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:21.659 10:23:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.659 10:23:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:21.659 ************************************
00:06:21.659 START TEST locking_app_on_unlocked_coremask
00:06:21.659 ************************************
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1826081
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1826081 /var/tmp/spdk.sock
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1826081 ']'
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.659 10:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:21.659 [2024-11-20 10:23:53.875612] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:21.659 [2024-11-20 10:23:53.875662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826081 ]
00:06:21.659 [2024-11-20 10:23:53.958861] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:21.659 [2024-11-20 10:23:53.958882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.659 [2024-11-20 10:23:53.988894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1826331
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1826331 /var/tmp/spdk2.sock
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1826331 ']'
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:22.602 10:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:22.602 [2024-11-20 10:23:54.712439] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:22.602 [2024-11-20 10:23:54.712495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826331 ]
00:06:22.602 [2024-11-20 10:23:54.800565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.602 [2024-11-20 10:23:54.858579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.172 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.172 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:23.172 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1826331
00:06:23.172 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1826331
00:06:23.172 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:23.745 lslocks: write error
00:06:23.745 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1826081
00:06:23.745 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1826081 ']'
00:06:23.745 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1826081
00:06:23.745 10:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:23.745 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:23.745 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1826081
00:06:23.745 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:23.745 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:23.745 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1826081'
killing process with pid 1826081
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1826081
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1826081
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1826331
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1826331 ']'
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1826331
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1826331
00:06:24.316 10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1826331'
killing process with pid 1826331
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1826331
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1826331
00:06:24.316
00:06:24.316 real 0m2.857s
00:06:24.316 user 0m3.209s
00:06:24.316 sys 0m0.858s
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
10:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:24.316 ************************************
00:06:24.316 END TEST locking_app_on_unlocked_coremask
00:06:24.316 ************************************
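[annotation] locking_app_on_unlocked_coremask above is the mirror image of the previous case: the first target declines the lock with --disable-cpumask-locks, which leaves the second, lock-taking target free to claim core 0. A sketch under the same assumptions as before (shortened binary path, flags from the trace):

    spdk_tgt -m 0x1 --disable-cpumask-locks & pid1=$!     # leaves core 0 unclaimed
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!      # claims /var/tmp/spdk_cpu_lock_000
    # After both sockets are up:
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second instance owns the lock"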
00:06:24.578 10:23:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:24.578 10:23:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:24.578 10:23:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:24.578 10:23:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:24.578 ************************************
00:06:24.578 START TEST locking_app_on_locked_coremask
00:06:24.578 ************************************
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1826786
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1826786 /var/tmp/spdk.sock
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1826786 ']'
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:24.578 10:23:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:24.578 [2024-11-20 10:23:56.822802] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:24.578 [2024-11-20 10:23:56.822856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826786 ]
00:06:24.578 [2024-11-20 10:23:56.909007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.578 [2024-11-20 10:23:56.947653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1826822
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1826822 /var/tmp/spdk2.sock
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1826822 /var/tmp/spdk2.sock
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1826822 /var/tmp/spdk2.sock
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1826822 ']'
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.520 10:23:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:25.520 [2024-11-20 10:23:57.676117] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:06:25.520 [2024-11-20 10:23:57.676176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826822 ]
00:06:25.520 [2024-11-20 10:23:57.765272] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1826786 has claimed it.
00:06:25.520 [2024-11-20 10:23:57.765308] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:26.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1826822) - No such process
00:06:26.090 ERROR: process (pid: 1826822) is no longer running
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1826786
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1826786
00:06:26.090 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:26.350 lslocks: write error
00:06:26.350 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1826786
00:06:26.350 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1826786 ']'
00:06:26.350 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1826786
00:06:26.350 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:26.350 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:26.350 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1826786
00:06:26.611 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:26.611 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:26.611 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1826786'
killing process with pid 1826786
10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1826786
10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1826786
00:06:26.611
00:06:26.611 real 0m2.179s
00:06:26.611 user 0m2.489s
00:06:26.611 sys 0m0.606s
10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:26.611 10:23:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:26.611 ************************************
00:06:26.611 END TEST locking_app_on_locked_coremask
00:06:26.611 ************************************
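[annotation] locking_app_on_locked_coremask above inverts the assertion: with core 0 already claimed, a second lock-taking target must refuse to start, and the harness wraps the wait in NOT ... so the expected claim_cpu_cores error counts as a pass. A sketch of that expect-failure pattern; this NOT is reconstructed in spirit from the trace, not copied from autotest_common.sh:

    # Succeed only if the wrapped command fails.
    NOT() { if "$@"; then return 1; else return 0; fi; }
    spdk_tgt -m 0x1 & pid1=$!    # claims core 0
    # A second instance on the same core should die with:
    #   "Cannot create lock on core 0, probably process <pid1> has claimed it."
    NOT spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock && echo "conflict detected as expected"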
00:06:26.872 [2024-11-20 10:23:59.065403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827170 ] 00:06:26.872 [2024-11-20 10:23:59.152792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.872 [2024-11-20 10:23:59.188507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.872 [2024-11-20 10:23:59.188660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.872 [2024-11-20 10:23:59.188662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.811 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1827486 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1827486 /var/tmp/spdk2.sock 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1827486 /var/tmp/spdk2.sock 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1827486 /var/tmp/spdk2.sock 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1827486 ']' 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.812 10:23:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.812 [2024-11-20 10:23:59.925190] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:06:27.812 [2024-11-20 10:23:59.925241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827486 ] 00:06:27.812 [2024-11-20 10:24:00.037005] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1827170 has claimed it. 00:06:27.812 [2024-11-20 10:24:00.037049] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1827486) - No such process 00:06:28.383 ERROR: process (pid: 1827486) is no longer running 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1827170 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1827170 ']' 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1827170 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827170 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827170' 00:06:28.383 killing process with pid 1827170 00:06:28.383 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1827170 00:06:28.383 10:24:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1827170 00:06:28.643 00:06:28.643 real 0m1.787s 00:06:28.643 user 0m5.181s 00:06:28.643 sys 0m0.387s 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.643 ************************************ 00:06:28.643 END TEST locking_overlapped_coremask 00:06:28.643 ************************************ 00:06:28.643 10:24:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.643 10:24:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.643 10:24:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.643 10:24:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.643 ************************************ 00:06:28.643 START TEST locking_overlapped_coremask_via_rpc 00:06:28.643 ************************************ 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1827587 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1827587 /var/tmp/spdk.sock 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1827587 ']' 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.643 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.644 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.644 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.644 10:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.644 [2024-11-20 10:24:00.936703] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:06:28.644 [2024-11-20 10:24:00.936762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827587 ] 00:06:28.904 [2024-11-20 10:24:01.024109] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
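The NOTICE just above, "CPU core locks deactivated", comes from starting spdk_tgt with --disable-cpumask-locks: an instance started that way does not claim /var/tmp/spdk_cpu_lock_* files at startup, so overlapping masks can coexist until locks are re-enabled over RPC. A rough two-instance reproduction follows; the binary path is the one from this workspace, and the no-lock-files outcome is an inference from the test flow rather than something this log states directly:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x7 --disable-cpumask-locks &
  "$SPDK_TGT" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 2
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core lock files taken"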
00:06:28.904 [2024-11-20 10:24:01.024139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.904 [2024-11-20 10:24:01.059789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.904 [2024-11-20 10:24:01.059938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.904 [2024-11-20 10:24:01.059940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1827965 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1827965 /var/tmp/spdk2.sock 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1827965 ']' 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.475 10:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.475 [2024-11-20 10:24:01.758181] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:06:29.475 [2024-11-20 10:24:01.758257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827965 ] 00:06:29.737 [2024-11-20 10:24:01.872782] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
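Between the reactor notices and the next trace lines, the harness's waitforlisten helper (from autotest_common.sh) polls until the target's UNIX socket is up. A stripped-down stand-in for the same idea, with the socket path and the max_retries=100 figure taken from the traces:

  sock=/var/tmp/spdk2.sock
  for _ in $(seq 1 100); do
    [ -S "$sock" ] && break
    sleep 0.1
  done
  [ -S "$sock" ] || { echo "timed out waiting for $sock" >&2; exit 1; }

The real helper also confirms the RPC server responds before returning; checking only for the socket file is a simplification.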
00:06:29.737 [2024-11-20 10:24:01.872812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.737 [2024-11-20 10:24:01.950760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.737 [2024-11-20 10:24:01.950902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.737 [2024-11-20 10:24:01.950903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.308 [2024-11-20 10:24:02.559237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1827587 has claimed it. 
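The claim error that closes the block above is the intended outcome: the test first re-enables core locks on the primary instance over JSON-RPC, so its reactors take /var/tmp/spdk_cpu_lock_000 through _002, and the identical call against the secondary socket then collides on core 2, producing the error above and the -32603 exchange printed next. The two calls, sketched with the repository's rpc.py (paths relative to this workspace's spdk checkout; the method name is the one the traces show):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" framework_enable_cpumask_locks                         # primary, default /var/tmp/spdk.sock
  "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # collides on core 2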
00:06:30.308 request: 00:06:30.308 { 00:06:30.308 "method": "framework_enable_cpumask_locks", 00:06:30.308 "req_id": 1 00:06:30.308 } 00:06:30.308 Got JSON-RPC error response 00:06:30.308 response: 00:06:30.308 { 00:06:30.308 "code": -32603, 00:06:30.308 "message": "Failed to claim CPU core: 2" 00:06:30.308 } 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1827587 /var/tmp/spdk.sock 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1827587 ']' 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.308 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1827965 /var/tmp/spdk2.sock 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1827965 ']' 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
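The -32603 response above is exactly the failure path the NOT wrapper was waiting for. The check_remaining_locks step that runs next then asserts the primary still holds exactly cores 0 through 2; its comparison, as the traces show it, is a glob of the live lock files against a brace expansion of the expected names:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # names expected for cores 0-2
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"

One subtlety of the glob form: with default shell options, if no lock file exists the pattern stays in the array as a literal string, so the comparison still fails cleanly rather than matching an empty list.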
00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.569 00:06:30.569 real 0m2.066s 00:06:30.569 user 0m0.849s 00:06:30.569 sys 0m0.151s 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.569 10:24:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.569 ************************************ 00:06:30.569 END TEST locking_overlapped_coremask_via_rpc 00:06:30.569 ************************************ 00:06:30.830 10:24:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.830 10:24:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1827587 ]] 00:06:30.830 10:24:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1827587 00:06:30.830 10:24:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1827587 ']' 00:06:30.830 10:24:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1827587 00:06:30.830 10:24:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.830 10:24:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.830 10:24:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827587 00:06:30.830 10:24:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.830 10:24:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.830 10:24:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827587' 00:06:30.830 killing process with pid 1827587 00:06:30.830 10:24:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1827587 00:06:30.830 10:24:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1827587 00:06:31.091 10:24:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1827965 ]] 00:06:31.091 10:24:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1827965 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1827965 ']' 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1827965 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827965 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827965' 00:06:31.091 killing process with pid 1827965 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1827965 00:06:31.091 10:24:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1827965 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1827587 ]] 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1827587 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1827587 ']' 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1827587 00:06:31.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1827587) - No such process 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1827587 is not found' 00:06:31.367 Process with pid 1827587 is not found 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1827965 ]] 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1827965 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1827965 ']' 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1827965 00:06:31.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1827965) - No such process 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1827965 is not found' 00:06:31.367 Process with pid 1827965 is not found 00:06:31.367 10:24:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.367 00:06:31.367 real 0m15.899s 00:06:31.367 user 0m28.033s 00:06:31.367 sys 0m4.773s 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.367 10:24:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.367 ************************************ 00:06:31.367 END TEST cpu_locks 00:06:31.367 ************************************ 00:06:31.367 00:06:31.367 real 0m41.849s 00:06:31.367 user 1m22.507s 00:06:31.367 sys 0m8.136s 00:06:31.367 10:24:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.367 10:24:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.367 ************************************ 00:06:31.367 END TEST event 00:06:31.367 ************************************ 00:06:31.367 10:24:03 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:31.367 10:24:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.367 10:24:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.367 10:24:03 -- common/autotest_common.sh@10 -- # set +x 00:06:31.367 ************************************ 00:06:31.367 START TEST thread 00:06:31.367 ************************************ 00:06:31.367 10:24:03 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:31.367 * Looking for test storage... 00:06:31.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:31.367 10:24:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.367 10:24:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.367 10:24:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.628 10:24:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.628 10:24:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.628 10:24:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.628 10:24:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.628 10:24:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.628 10:24:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.628 10:24:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.628 10:24:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.628 10:24:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.628 10:24:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.628 10:24:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.628 10:24:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:31.628 10:24:03 thread -- scripts/common.sh@345 -- # : 1 00:06:31.628 10:24:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.628 10:24:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.628 10:24:03 thread -- scripts/common.sh@365 -- # decimal 1 00:06:31.628 10:24:03 thread -- scripts/common.sh@353 -- # local d=1 00:06:31.628 10:24:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.628 10:24:03 thread -- scripts/common.sh@355 -- # echo 1 00:06:31.628 10:24:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.628 10:24:03 thread -- scripts/common.sh@366 -- # decimal 2 00:06:31.628 10:24:03 thread -- scripts/common.sh@353 -- # local d=2 00:06:31.628 10:24:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.628 10:24:03 thread -- scripts/common.sh@355 -- # echo 2 00:06:31.628 10:24:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.628 10:24:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.628 10:24:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.628 10:24:03 thread -- scripts/common.sh@368 -- # return 0 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.628 --rc genhtml_branch_coverage=1 00:06:31.628 --rc genhtml_function_coverage=1 00:06:31.628 --rc genhtml_legend=1 00:06:31.628 --rc geninfo_all_blocks=1 00:06:31.628 --rc geninfo_unexecuted_blocks=1 00:06:31.628 00:06:31.628 ' 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.628 --rc genhtml_branch_coverage=1 00:06:31.628 --rc genhtml_function_coverage=1 00:06:31.628 --rc genhtml_legend=1 00:06:31.628 --rc geninfo_all_blocks=1 00:06:31.628 --rc geninfo_unexecuted_blocks=1 00:06:31.628 
00:06:31.628 ' 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.628 --rc genhtml_branch_coverage=1 00:06:31.628 --rc genhtml_function_coverage=1 00:06:31.628 --rc genhtml_legend=1 00:06:31.628 --rc geninfo_all_blocks=1 00:06:31.628 --rc geninfo_unexecuted_blocks=1 00:06:31.628 00:06:31.628 ' 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.628 --rc genhtml_branch_coverage=1 00:06:31.628 --rc genhtml_function_coverage=1 00:06:31.628 --rc genhtml_legend=1 00:06:31.628 --rc geninfo_all_blocks=1 00:06:31.628 --rc geninfo_unexecuted_blocks=1 00:06:31.628 00:06:31.628 ' 00:06:31.628 10:24:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.628 10:24:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.628 ************************************ 00:06:31.628 START TEST thread_poller_perf 00:06:31.628 ************************************ 00:06:31.628 10:24:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.628 [2024-11-20 10:24:03.879319] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:06:31.628 [2024-11-20 10:24:03.879428] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828424 ] 00:06:31.628 [2024-11-20 10:24:03.969124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.888 [2024-11-20 10:24:04.008548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.888 Running 1000 pollers for 1 seconds with 1 microseconds period. 
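In the results that follow, poller_cost is derivable from the three figures printed with it: busy cycles divided by total_run_count gives cycles per poll, and the TSC rate converts that to nanoseconds. Plugging in the numbers this run reports reproduces the printed 5758 cyc / 2399 nsec, though the exact rounding inside poller_perf is an assumption:

  busy_cyc=2407159544
  runs=418000
  tsc_hz=2400000000
  cyc=$(( busy_cyc / runs ))                 # 5758 cycles per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))      # 2399 nanoseconds per poll
  echo "poller_cost: $cyc (cyc), $nsec (nsec)"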
00:06:32.832 [2024-11-20T09:24:05.208Z] ====================================== 00:06:32.832 [2024-11-20T09:24:05.209Z] busy:2407159544 (cyc) 00:06:32.833 [2024-11-20T09:24:05.209Z] total_run_count: 418000 00:06:32.833 [2024-11-20T09:24:05.209Z] tsc_hz: 2400000000 (cyc) 00:06:32.833 [2024-11-20T09:24:05.209Z] ====================================== 00:06:32.833 [2024-11-20T09:24:05.209Z] poller_cost: 5758 (cyc), 2399 (nsec) 00:06:32.833 00:06:32.833 real 0m1.184s 00:06:32.833 user 0m1.097s 00:06:32.833 sys 0m0.082s 00:06:32.833 10:24:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.833 10:24:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.833 ************************************ 00:06:32.833 END TEST thread_poller_perf 00:06:32.833 ************************************ 00:06:32.833 10:24:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.833 10:24:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:32.833 10:24:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.833 10:24:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.833 ************************************ 00:06:32.833 START TEST thread_poller_perf 00:06:32.833 ************************************ 00:06:32.833 10:24:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.833 [2024-11-20 10:24:05.139559] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:06:32.833 [2024-11-20 10:24:05.139664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828778 ] 00:06:33.096 [2024-11-20 10:24:05.226012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.096 [2024-11-20 10:24:05.261129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.096 Running 1000 pollers for 1 seconds with 0 microseconds period. 
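The second invocation, announced just above, differs only in -l 0: a zero period registers the pollers as busy pollers that fire on every reactor iteration instead of on a timer, which is why the per-poll cost in the next table drops from thousands of cycles to a few hundred. For reference, the two command lines this suite runs, with paths and flags exactly as in the run_test traces (-b is the poller count, -l the period in microseconds, -t the duration in seconds):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf
  "$PERF" -b 1000 -l 1 -t 1   # timed pollers, 1 usec period
  "$PERF" -b 1000 -l 0 -t 1   # busy pollers, run every iteration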
00:06:34.038 [2024-11-20T09:24:06.414Z] ====================================== 00:06:34.038 [2024-11-20T09:24:06.414Z] busy:2401293450 (cyc) 00:06:34.038 [2024-11-20T09:24:06.414Z] total_run_count: 5565000 00:06:34.038 [2024-11-20T09:24:06.414Z] tsc_hz: 2400000000 (cyc) 00:06:34.038 [2024-11-20T09:24:06.414Z] ====================================== 00:06:34.038 [2024-11-20T09:24:06.414Z] poller_cost: 431 (cyc), 179 (nsec) 00:06:34.038 00:06:34.038 real 0m1.169s 00:06:34.038 user 0m1.086s 00:06:34.038 sys 0m0.079s 00:06:34.038 10:24:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.038 10:24:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.038 ************************************ 00:06:34.038 END TEST thread_poller_perf 00:06:34.038 ************************************ 00:06:34.038 10:24:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.038 00:06:34.038 real 0m2.711s 00:06:34.038 user 0m2.362s 00:06:34.038 sys 0m0.362s 00:06:34.038 10:24:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.038 10:24:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.038 ************************************ 00:06:34.038 END TEST thread 00:06:34.038 ************************************ 00:06:34.038 10:24:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:34.038 10:24:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.038 10:24:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.038 10:24:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.038 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:06:34.038 ************************************ 00:06:34.038 START TEST app_cmdline 00:06:34.038 ************************************ 00:06:34.038 10:24:06 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.299 * Looking for test storage... 
00:06:34.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.299 10:24:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.299 --rc genhtml_branch_coverage=1 00:06:34.299 --rc genhtml_function_coverage=1 00:06:34.299 --rc genhtml_legend=1 00:06:34.299 --rc geninfo_all_blocks=1 00:06:34.299 --rc geninfo_unexecuted_blocks=1 00:06:34.299 00:06:34.299 ' 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.299 --rc genhtml_branch_coverage=1 00:06:34.299 --rc genhtml_function_coverage=1 00:06:34.299 --rc genhtml_legend=1 00:06:34.299 --rc geninfo_all_blocks=1 00:06:34.299 --rc geninfo_unexecuted_blocks=1 
00:06:34.299 00:06:34.299 ' 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.299 --rc genhtml_branch_coverage=1 00:06:34.299 --rc genhtml_function_coverage=1 00:06:34.299 --rc genhtml_legend=1 00:06:34.299 --rc geninfo_all_blocks=1 00:06:34.299 --rc geninfo_unexecuted_blocks=1 00:06:34.299 00:06:34.299 ' 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.299 --rc genhtml_branch_coverage=1 00:06:34.299 --rc genhtml_function_coverage=1 00:06:34.299 --rc genhtml_legend=1 00:06:34.299 --rc geninfo_all_blocks=1 00:06:34.299 --rc geninfo_unexecuted_blocks=1 00:06:34.299 00:06:34.299 ' 00:06:34.299 10:24:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.299 10:24:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1829157 00:06:34.299 10:24:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1829157 00:06:34.299 10:24:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1829157 ']' 00:06:34.300 10:24:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.300 10:24:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.300 10:24:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.300 10:24:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.300 10:24:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.300 10:24:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 [2024-11-20 10:24:06.670665] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
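This target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC server will only dispatch those two methods; anything else, including the env_dpdk_get_mem_stats call the test deliberately makes further down, is answered with JSON-RPC -32601 "Method not found". The allowlist behavior, sketched with the paths this log uses (the sleep is a crude stand-in for waitforlisten):

  TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$TGT" --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 1
  "$RPC" spdk_get_version         # allowed
  "$RPC" rpc_get_methods          # allowed
  "$RPC" env_dpdk_get_mem_stats   # rejected: -32601 "Method not found"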
00:06:34.300 [2024-11-20 10:24:06.670738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829157 ] 00:06:34.559 [2024-11-20 10:24:06.758878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.559 [2024-11-20 10:24:06.799505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.129 10:24:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.129 10:24:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:35.129 10:24:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:35.391 { 00:06:35.391 "version": "SPDK v25.01-pre git sha1 a25b16198", 00:06:35.391 "fields": { 00:06:35.391 "major": 25, 00:06:35.391 "minor": 1, 00:06:35.391 "patch": 0, 00:06:35.391 "suffix": "-pre", 00:06:35.391 "commit": "a25b16198" 00:06:35.391 } 00:06:35.391 } 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.391 10:24:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:35.391 10:24:07 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.652 request: 00:06:35.652 { 00:06:35.652 "method": "env_dpdk_get_mem_stats", 00:06:35.652 "req_id": 1 00:06:35.652 } 00:06:35.652 Got JSON-RPC error response 00:06:35.652 response: 00:06:35.652 { 00:06:35.652 "code": -32601, 00:06:35.652 "message": "Method not found" 00:06:35.652 } 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.652 10:24:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1829157 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1829157 ']' 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1829157 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1829157 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1829157' 00:06:35.652 killing process with pid 1829157 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 1829157 00:06:35.652 10:24:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 1829157 00:06:35.913 00:06:35.913 real 0m1.735s 00:06:35.913 user 0m2.085s 00:06:35.913 sys 0m0.478s 00:06:35.913 10:24:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.913 10:24:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 END TEST app_cmdline 00:06:35.913 ************************************ 00:06:35.913 10:24:08 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:35.913 10:24:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.913 10:24:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.913 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 START TEST version 00:06:35.913 ************************************ 00:06:35.913 10:24:08 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.174 * Looking for test storage... 
00:06:36.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.174 10:24:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.174 10:24:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.174 10:24:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.174 10:24:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.174 10:24:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.174 10:24:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.174 10:24:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.174 10:24:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.174 10:24:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.174 10:24:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.174 10:24:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.174 10:24:08 version -- scripts/common.sh@344 -- # case "$op" in 00:06:36.174 10:24:08 version -- scripts/common.sh@345 -- # : 1 00:06:36.174 10:24:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.174 10:24:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.174 10:24:08 version -- scripts/common.sh@365 -- # decimal 1 00:06:36.174 10:24:08 version -- scripts/common.sh@353 -- # local d=1 00:06:36.174 10:24:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.174 10:24:08 version -- scripts/common.sh@355 -- # echo 1 00:06:36.174 10:24:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.174 10:24:08 version -- scripts/common.sh@366 -- # decimal 2 00:06:36.174 10:24:08 version -- scripts/common.sh@353 -- # local d=2 00:06:36.174 10:24:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.174 10:24:08 version -- scripts/common.sh@355 -- # echo 2 00:06:36.174 10:24:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.174 10:24:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.174 10:24:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.174 10:24:08 version -- scripts/common.sh@368 -- # return 0 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.174 --rc genhtml_branch_coverage=1 00:06:36.174 --rc genhtml_function_coverage=1 00:06:36.174 --rc genhtml_legend=1 00:06:36.174 --rc geninfo_all_blocks=1 00:06:36.174 --rc geninfo_unexecuted_blocks=1 00:06:36.174 00:06:36.174 ' 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.174 --rc genhtml_branch_coverage=1 00:06:36.174 --rc genhtml_function_coverage=1 00:06:36.174 --rc genhtml_legend=1 00:06:36.174 --rc geninfo_all_blocks=1 00:06:36.174 --rc geninfo_unexecuted_blocks=1 00:06:36.174 00:06:36.174 ' 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.174 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.174 --rc genhtml_branch_coverage=1 00:06:36.174 --rc genhtml_function_coverage=1 00:06:36.174 --rc genhtml_legend=1 00:06:36.174 --rc geninfo_all_blocks=1 00:06:36.174 --rc geninfo_unexecuted_blocks=1 00:06:36.174 00:06:36.174 ' 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.174 --rc genhtml_branch_coverage=1 00:06:36.174 --rc genhtml_function_coverage=1 00:06:36.174 --rc genhtml_legend=1 00:06:36.174 --rc geninfo_all_blocks=1 00:06:36.174 --rc geninfo_unexecuted_blocks=1 00:06:36.174 00:06:36.174 ' 00:06:36.174 10:24:08 version -- app/version.sh@17 -- # get_header_version major 00:06:36.174 10:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # cut -f2 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.174 10:24:08 version -- app/version.sh@17 -- # major=25 00:06:36.174 10:24:08 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.174 10:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # cut -f2 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.174 10:24:08 version -- app/version.sh@18 -- # minor=1 00:06:36.174 10:24:08 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.174 10:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # cut -f2 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.174 10:24:08 version -- app/version.sh@19 -- # patch=0 00:06:36.174 10:24:08 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.174 10:24:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # cut -f2 00:06:36.174 10:24:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.174 10:24:08 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.174 10:24:08 version -- app/version.sh@22 -- # version=25.1 00:06:36.174 10:24:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.174 10:24:08 version -- app/version.sh@28 -- # version=25.1rc0 00:06:36.174 10:24:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.174 10:24:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.174 10:24:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:36.174 10:24:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:36.174 00:06:36.174 real 0m0.285s 00:06:36.174 user 0m0.168s 00:06:36.174 sys 0m0.169s 00:06:36.174 10:24:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.174 
10:24:08 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.174 ************************************ 00:06:36.174 END TEST version 00:06:36.174 ************************************ 00:06:36.174 10:24:08 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:36.174 10:24:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:36.435 10:24:08 -- spdk/autotest.sh@194 -- # uname -s 00:06:36.435 10:24:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:36.435 10:24:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.435 10:24:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.435 10:24:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:36.435 10:24:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.435 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:06:36.435 10:24:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:36.435 10:24:08 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:36.435 10:24:08 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.435 10:24:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.435 10:24:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.435 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:06:36.435 ************************************ 00:06:36.436 START TEST nvmf_tcp 00:06:36.436 ************************************ 00:06:36.436 10:24:08 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.436 * Looking for test storage... 
00:06:36.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.436 10:24:08 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.436 10:24:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.436 10:24:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.696 10:24:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.696 10:24:08 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.697 10:24:08 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.697 --rc genhtml_branch_coverage=1 00:06:36.697 --rc genhtml_function_coverage=1 00:06:36.697 --rc genhtml_legend=1 00:06:36.697 --rc geninfo_all_blocks=1 00:06:36.697 --rc geninfo_unexecuted_blocks=1 00:06:36.697 00:06:36.697 ' 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.697 --rc genhtml_branch_coverage=1 00:06:36.697 --rc genhtml_function_coverage=1 00:06:36.697 --rc genhtml_legend=1 00:06:36.697 --rc geninfo_all_blocks=1 00:06:36.697 --rc geninfo_unexecuted_blocks=1 00:06:36.697 00:06:36.697 ' 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:36.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.697 --rc genhtml_branch_coverage=1 00:06:36.697 --rc genhtml_function_coverage=1 00:06:36.697 --rc genhtml_legend=1 00:06:36.697 --rc geninfo_all_blocks=1 00:06:36.697 --rc geninfo_unexecuted_blocks=1 00:06:36.697 00:06:36.697 ' 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.697 --rc genhtml_branch_coverage=1 00:06:36.697 --rc genhtml_function_coverage=1 00:06:36.697 --rc genhtml_legend=1 00:06:36.697 --rc geninfo_all_blocks=1 00:06:36.697 --rc geninfo_unexecuted_blocks=1 00:06:36.697 00:06:36.697 ' 00:06:36.697 10:24:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:36.697 10:24:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:36.697 10:24:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.697 10:24:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.697 ************************************ 00:06:36.697 START TEST nvmf_target_core 00:06:36.697 ************************************ 00:06:36.697 10:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:36.697 * Looking for test storage... 00:06:36.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.697 10:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.697 10:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.697 10:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.697 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.957 --rc genhtml_branch_coverage=1 00:06:36.957 --rc genhtml_function_coverage=1 00:06:36.957 --rc genhtml_legend=1 00:06:36.957 --rc geninfo_all_blocks=1 00:06:36.957 --rc geninfo_unexecuted_blocks=1 00:06:36.957 00:06:36.957 ' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.957 --rc genhtml_branch_coverage=1 00:06:36.957 --rc genhtml_function_coverage=1 00:06:36.957 --rc genhtml_legend=1 00:06:36.957 --rc geninfo_all_blocks=1 00:06:36.957 --rc geninfo_unexecuted_blocks=1 00:06:36.957 00:06:36.957 ' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.957 --rc genhtml_branch_coverage=1 00:06:36.957 --rc genhtml_function_coverage=1 00:06:36.957 --rc genhtml_legend=1 00:06:36.957 --rc geninfo_all_blocks=1 00:06:36.957 --rc geninfo_unexecuted_blocks=1 00:06:36.957 00:06:36.957 ' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.957 --rc genhtml_branch_coverage=1 00:06:36.957 --rc genhtml_function_coverage=1 00:06:36.957 --rc genhtml_legend=1 00:06:36.957 --rc geninfo_all_blocks=1 00:06:36.957 --rc geninfo_unexecuted_blocks=1 00:06:36.957 00:06:36.957 ' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.957 
************************************ 00:06:36.957 START TEST nvmf_abort 00:06:36.957 ************************************ 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:36.957 * Looking for test storage... 00:06:36.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.957 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.218 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
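A note on the "[: : integer expression expected" complaint that nvmf/common.sh line 33 has now printed twice above: bash's test builtin rejects -eq when an operand expands to an empty string, returns status 2, and the script simply falls through. A minimal reproduction, assuming a placeholder variable name (the trace only shows the empty expansion '' at that line):

# SOME_FLAG stands in for whatever common.sh tests at line 33; the log
# shows only '[' '' -eq 1 ']'.
SOME_FLAG=''
if [ "$SOME_FLAG" -eq 1 ]; then    # prints the error, test exits with status 2
    echo "flag set"                # never reached
fi
# Defaulting the expansion silences the message without changing behavior:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi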
00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.219 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.361 10:24:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:45.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.361 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:45.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.362 10:24:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:45.362 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:45.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.362 10:24:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:06:45.362 00:06:45.362 --- 10.0.0.2 ping statistics --- 00:06:45.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.362 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:45.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:06:45.362 00:06:45.362 --- 10.0.0.1 ping statistics --- 00:06:45.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.362 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1834030 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1834030 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1834030 ']' 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.362 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.362 [2024-11-20 10:24:16.978959] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
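While nvmf_tgt comes up inside the namespace, it is worth collecting the nvmf_tcp_init trace above into the handful of commands it amounts to: one port of the E810 pair is moved into a private namespace for the target, the other stays in the root namespace for the initiator. A condensed replay, with interface names and addresses exactly as logged and common.sh's error handling omitted:

# cvl_0_0 = target port (10.0.0.2, inside cvl_0_0_ns_spdk),
# cvl_0_1 = initiator port (10.0.0.1, root namespace).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, then verify
# reachability in both directions (the two pings traced above):
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1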
00:06:45.362 [2024-11-20 10:24:16.979025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.362 [2024-11-20 10:24:17.080874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.362 [2024-11-20 10:24:17.134679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.362 [2024-11-20 10:24:17.134729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.362 [2024-11-20 10:24:17.134738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.362 [2024-11-20 10:24:17.134745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.362 [2024-11-20 10:24:17.134752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.362 [2024-11-20 10:24:17.136567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.362 [2024-11-20 10:24:17.136724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.362 [2024-11-20 10:24:17.136726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 [2024-11-20 10:24:17.855804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 Malloc0 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 Delay0 
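The Malloc0/Delay0 pair just created is the heart of the abort test: a delay bdev on top of a RAM disk keeps every I/O in flight long enough to be aborted. As standalone rpc.py calls (values copied from the trace; the four latency flags are, on this reading, average and p99 read latency followed by average and p99 write latency, in microseconds):

# 64 MiB RAM disk with 4096-byte blocks, wrapped in ~1 s delays.
# rpc.py talks to the default socket /var/tmp/spdk.sock.
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000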
00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 [2024-11-20 10:24:17.938771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.624 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:45.885 [2024-11-20 10:24:18.048099] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:48.429 Initializing NVMe Controllers 00:06:48.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:48.429 controller IO queue size 128 less than required 00:06:48.429 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:48.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:48.429 Initialization complete. Launching workers. 
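Before the worker statistics arrive below, the scattered rpc_cmd trace is easier to read collected into the runnable sequence it represents: expose Delay0 through a TCP subsystem on the namespaced address, then point the abort example at it with a 128-deep queue. Arguments are exactly as logged:

# Target side: subsystem cnode0, backed by the delayed namespace,
# listening (data + discovery) on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Initiator side: queue 128 I/Os on one core for 1 second and abort them.
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128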
00:06:48.429 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28546 00:06:48.429 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28607, failed to submit 62 00:06:48.429 success 28550, unsuccessful 57, failed 0 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:48.429 rmmod nvme_tcp 00:06:48.429 rmmod nvme_fabrics 00:06:48.429 rmmod nvme_keyring 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1834030 ']' 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1834030 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1834030 ']' 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1834030 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1834030 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1834030' 00:06:48.429 killing process with pid 1834030 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1834030 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1834030 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:48.429 10:24:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.429 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:50.336 00:06:50.336 real 0m13.406s 00:06:50.336 user 0m14.075s 00:06:50.336 sys 0m6.604s 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.336 ************************************ 00:06:50.336 END TEST nvmf_abort 00:06:50.336 ************************************ 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.336 ************************************ 00:06:50.336 START TEST nvmf_ns_hotplug_stress 00:06:50.336 ************************************ 00:06:50.336 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:50.597 * Looking for test storage... 
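The storage banner above opens the fourth repetition of the lcov probe: each test re-sources autotest_common.sh, which runs cmp_versions to decide whether the installed lcov predates 2.x and therefore needs the old --rc branch/function coverage switches. A condensed sketch of the comparison being traced (the real scripts/common.sh also validates each field through its decimal helper, omitted here):

# Split both versions on '.', '-' and ':', then compare field by field;
# missing fields count as 0. Returns 0 (true) when $1 < $2.
cmp_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}
cmp_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'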
00:06:50.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.597 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.597 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.598 --rc genhtml_branch_coverage=1 00:06:50.598 --rc genhtml_function_coverage=1 00:06:50.598 --rc genhtml_legend=1 00:06:50.598 --rc geninfo_all_blocks=1 00:06:50.598 --rc geninfo_unexecuted_blocks=1 00:06:50.598 00:06:50.598 ' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.598 --rc genhtml_branch_coverage=1 00:06:50.598 --rc genhtml_function_coverage=1 00:06:50.598 --rc genhtml_legend=1 00:06:50.598 --rc geninfo_all_blocks=1 00:06:50.598 --rc geninfo_unexecuted_blocks=1 00:06:50.598 00:06:50.598 ' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.598 --rc genhtml_branch_coverage=1 00:06:50.598 --rc genhtml_function_coverage=1 00:06:50.598 --rc genhtml_legend=1 00:06:50.598 --rc geninfo_all_blocks=1 00:06:50.598 --rc geninfo_unexecuted_blocks=1 00:06:50.598 00:06:50.598 ' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.598 --rc genhtml_branch_coverage=1 00:06:50.598 --rc genhtml_function_coverage=1 00:06:50.598 --rc genhtml_legend=1 00:06:50.598 --rc geninfo_all_blocks=1 00:06:50.598 --rc geninfo_unexecuted_blocks=1 00:06:50.598 00:06:50.598 ' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
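The PATH dump above is expected noise rather than breakage: paths/export.sh prepends the Go, protoc, and golangci toolchain directories every time it is sourced, nothing deduplicates afterwards, and nested sourcing repeats the same prefix several times over. Lookup is unaffected since the first match wins. If the duplication ever needed trimming, a standard first-seen dedup would do it; a minimal sketch, not part of the harness:

    # Drop repeated PATH entries while keeping first-seen order.
    # awk reads ':'-separated records, prints each one once, re-joins with ':';
    # sed strips the trailing ':' that awk's ORS leaves behind.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH
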
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.598 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.599 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.846 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:58.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.847 
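The escaped globs in this stretch (\0\x\1\0\1\7 and friends) are only xtrace's rendering of literal patterns in [[ $id == 0x1017 ]]-style tests: gather_supported_nvmf_pci_devs walks the PCI bus and buckets each function by device ID before deciding which ports the TCP test may claim. Both functions here report 0x159b, an Intel E810 port bound to the ice driver, so the e810 bucket wins. A reduced sketch of the same classification idiom, using a subset of the IDs visible in the trace (the helper name is invented for illustration):

    # classify_nic: bucket a PCI device ID into the NIC families the test knows about.
    classify_nic() {
        case "$1" in
            0x1592|0x159b)               echo e810    ;;  # Intel E810 (ice)
            0x37d2)                      echo x722    ;;  # Intel X722
            0x1017|0x1019|0x1015|0x1013) echo mlx     ;;  # Mellanox ConnectX family
            *)                           echo unknown ;;
        esac
    }

    classify_nic 0x159b   # prints: e810
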
10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:58.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:58.847 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:58.847 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:58.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:06:58.847 00:06:58.847 --- 10.0.0.2 ping statistics --- 00:06:58.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.847 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:06:58.847 00:06:58.847 --- 10.0.0.1 ping statistics --- 00:06:58.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.847 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:58.847 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1838869 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1838869 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1838869 ']' 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.848 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 [2024-11-20 10:24:30.433960] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:06:58.848 [2024-11-20 10:24:30.434034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.848 [2024-11-20 10:24:30.535010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.848 [2024-11-20 10:24:30.587042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.848 [2024-11-20 10:24:30.587091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.848 [2024-11-20 10:24:30.587100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.848 [2024-11-20 10:24:30.587107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.848 [2024-11-20 10:24:30.587113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
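To summarize the state at this point: cvl_0_0 has been moved into namespace cvl_0_0_ns_spdk as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables ACCEPT rule opens TCP/4420, both directions ping cleanly, and nvmf_tgt (PID 1838869, cores 1-3 per -m 0xE) is starting inside the namespace while the harness waits on /var/tmp/spdk.sock. Once the RPC socket answers, ns_hotplug_stress.sh provisions the target; condensed from the trace that follows, with every command and flag as logged:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Transport, subsystem (up to 10 namespaces), data and discovery listeners.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Backing bdevs: a 32 MB malloc disk wrapped in a delay bdev that injects
    # large fixed latencies (values in microseconds), plus a 1000 MB null bdev
    # that the stress loop below keeps resizing.
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
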
00:06:58.848 [2024-11-20 10:24:30.589213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.848 [2024-11-20 10:24:30.589399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.848 [2024-11-20 10:24:30.589399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:59.109 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:59.109 [2024-11-20 10:24:31.450183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.369 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.370 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.630 [2024-11-20 10:24:31.841116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.630 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.891 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:59.891 Malloc0 00:07:00.152 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:00.152 Delay0 00:07:00.152 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.412 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:00.672 NULL1 00:07:00.672 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:00.672 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1839557 00:07:00.672 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:00.672 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:00.672 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.932 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.193 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:01.193 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:01.193 true 00:07:01.454 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:01.454 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.454 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.715 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:01.715 10:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:01.975 true 00:07:01.975 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:01.975 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.975 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.236 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:02.236 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:02.496 true 00:07:02.496 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:02.496 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.496 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.755 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:02.756 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:03.017 true 00:07:03.017 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:03.017 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.278 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.278 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:03.278 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:03.538 true 00:07:03.538 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:03.538 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.799 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.799 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:03.799 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:04.059 true 00:07:04.059 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:04.059 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.318 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.318 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:04.318 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:04.577 true 00:07:04.577 10:24:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:04.577 10:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.837 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.097 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:05.097 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:05.097 true 00:07:05.097 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:05.097 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.357 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.616 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:05.616 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:05.616 true 00:07:05.616 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:05.616 10:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.875 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.135 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:06.135 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:06.135 true 00:07:06.395 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:06.395 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.395 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.656 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:06.656 10:24:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:06.925 true 00:07:06.925 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:06.925 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.925 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.189 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:07.189 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:07.451 true 00:07:07.451 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:07.451 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.451 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.711 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:07.711 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:07.970 true 00:07:07.970 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:07.970 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.231 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.231 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:08.231 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:08.492 true 00:07:08.492 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:08.492 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.753 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.753 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:08.753 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:09.014 true 00:07:09.014 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:09.014 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.275 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.536 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:09.536 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:09.536 true 00:07:09.536 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:09.536 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.795 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.056 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:10.056 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:10.056 true 00:07:10.056 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:10.056 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.317 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:10.576 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:10.576 true 00:07:10.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:10.836 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.836 10:24:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.096 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:11.096 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:11.357 true 00:07:11.357 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:11.357 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.357 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.618 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:11.618 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:11.879 true 00:07:11.879 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:11.879 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.140 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.140 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:12.140 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:12.401 true 00:07:12.401 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:12.401 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.663 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.663 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:12.663 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:12.924 true 00:07:12.924 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:12.924 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.184 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.445 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:13.445 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:13.445 true 00:07:13.445 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:13.445 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.706 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.967 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:13.967 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:13.967 true 00:07:13.967 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:13.967 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.227 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.488 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:14.488 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:14.749 true 00:07:14.749 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:14.749 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.749 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.010 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:15.010 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:15.271 true 00:07:15.271 10:24:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:15.271 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.271 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.532 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:15.532 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:15.792 true 00:07:15.792 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:15.792 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.053 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.053 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:16.053 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:16.313 true 00:07:16.313 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:16.313 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.573 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.573 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:16.573 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:16.833 true 00:07:16.833 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:16.833 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.093 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.353 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:17.353 10:24:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:17.353 true 00:07:17.353 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:17.353 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.612 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.873 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:17.873 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:17.873 true 00:07:17.873 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:17.873 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.133 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.393 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:18.393 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:18.393 true 00:07:18.393 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:18.393 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.651 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.912 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:18.912 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:18.912 true 00:07:19.172 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:19.172 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.172 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.432 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:19.432 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:19.692 true 00:07:19.692 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:19.692 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.692 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.952 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:19.952 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:20.212 true 00:07:20.212 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:20.212 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.472 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.472 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:20.472 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:20.732 true 00:07:20.732 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:20.732 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.992 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.992 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:20.992 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:21.252 true 00:07:21.252 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:21.252 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.513 10:24:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.774 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:21.774 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:21.774 true 00:07:21.774 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:21.774 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.035 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.294 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:22.294 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:22.294 true 00:07:22.294 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:22.294 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.554 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.815 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:22.815 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:22.815 true 00:07:23.075 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:23.075 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.075 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.335 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:23.335 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:23.596 true 00:07:23.596 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:23.596 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.596 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.856 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:23.856 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:24.117 true 00:07:24.117 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:24.117 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.117 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.377 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:24.377 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:24.638 true 00:07:24.638 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:24.638 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.898 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.898 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:24.898 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:25.158 true 00:07:25.158 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:25.158 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.418 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.418 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:25.418 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:25.679 true 00:07:25.679 10:24:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:25.679 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.939 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.199 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:26.199 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:26.199 true 00:07:26.199 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:26.199 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.459 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.718 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:26.718 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:26.718 true 00:07:26.718 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:26.718 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.978 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.240 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:27.240 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:27.502 true 00:07:27.502 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:27.502 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.502 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.763 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:27.763 10:25:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:28.023 true 00:07:28.023 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:28.023 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.023 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.284 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:28.284 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:28.545 true 00:07:28.545 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:28.545 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.806 10:25:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.806 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:28.806 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:29.067 true 00:07:29.067 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:29.067 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.327 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.327 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:29.327 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:29.587 true 00:07:29.587 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557 00:07:29.587 10:25:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.847 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:07:29.847 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:07:29.847 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:07:30.108 true
00:07:30.108 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557
00:07:30.108 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:30.368 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:30.629 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:07:30.629 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:07:30.629 true
00:07:30.629 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557
00:07:30.629 10:25:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:30.889 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:31.151 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:07:31.151 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:07:31.151 Initializing NVMe Controllers
00:07:31.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:31.151 Controller IO queue size 128, less than required.
00:07:31.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:31.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:31.151 Initialization complete. Launching workers.
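The trace above is the single-namespace stage of ns_hotplug_stress.sh: while the background I/O job (PID 1839557) stays alive, NSID 1 is hot-unplugged and re-added each pass, and the null bdev behind NSID 2 is grown by one block (null_size 1023, 1024, ... 1055). A minimal bash sketch of that loop as reconstructed from the @44-@50 xtrace markers; the variable names $rpc and $perf_pid are assumptions, not the script's verbatim text:

    null_size=1022                                                     # starting value is an assumption; this chunk shows 1023 onward
    while kill -0 "$perf_pid"; do                                      # @44: run until the I/O job exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-unplug NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: plug Delay0 back in as NSID 1
        null_size=$((null_size + 1))                                   # @49: next size
        "$rpc" bdev_null_resize NULL1 "$null_size"                     # @50: grow the bdev backing NSID 2
    done

The I/O summary just below reports 30875.63 IOPS at 15.08 MiB/s, consistent with 512-byte requests (30875.63 × 512 B ≈ 15.08 MiB/s), and the failed kill -0 right after it is how the loop notices that the job has finished.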
00:07:31.151 ========================================================
00:07:31.151                                                                               Latency(us)
00:07:31.151 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:31.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30875.63      15.08    4145.57    1137.45    8086.67
00:07:31.151 ========================================================
00:07:31.151 Total                                                                   :   30875.63      15.08    4145.57    1137.45    8086.67
00:07:31.151
00:07:31.151 true
00:07:31.151 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1839557
00:07:31.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1839557) - No such process
00:07:31.151 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1839557
00:07:31.151 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:31.446 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:31.729 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:31.730 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:31.730 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:31.730 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:31.730 10:25:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:31.730 null0
00:07:31.730 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:31.730 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:31.730 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:31.994 null1
00:07:31.994 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:31.994 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:31.994 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:32.254 null2
00:07:32.254 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:32.254 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:32.254 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:32.254 null3
00:07:32.254 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:32.254 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:32.254 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:32.515 null4
00:07:32.515 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:32.515 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:32.515 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:32.777 null5
00:07:32.777 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:32.777 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:32.777 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:32.777 null6
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:33.038 null7
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
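Once kill -0 fails with "No such process", the script reaps the I/O job (the wait at line 53), drops both namespaces (lines 54-55), and moves on to the concurrent stage traced above: lines 58-64 create eight 100 MiB null bdevs with a 4 KiB block size and fork one add_remove worker per namespace, collecting the PIDs for the wait at line 66. A sketch of that stage, under the same assumed $rpc variable; the exact loop syntax in the script is reconstructed from the xtrace, not quoted:

    nthreads=8; pids=()                              # @58
    for ((i = 0; i < nthreads; i++)); do             # @59
        "$rpc" bdev_null_create "null$i" 100 4096    # @60: 100 MiB bdev, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do             # @62
        add_remove $((i + 1)) "null$i" &             # @63: background worker for NSID i+1
        pids+=("$!")                                 # @64: remember its PID
    done
    wait "${pids[@]}"                                # @66: reap all eight workers

The eight PIDs collected here are the ones echoed later by the @66 wait (1846108, 1846109, and so on).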
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
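Each worker is an instance of the add_remove shell function; its @14-@18 markers show ten add/remove cycles against a fixed NSID/bdev pair. Reconstructed as a sketch from those markers (again with the assumed $rpc variable):

    add_remove() {                        # ns_hotplug_stress.sh lines 14-18, per the @-markers
        local nsid=$1 bdev=$2             # @14: this worker's namespace ID and backing bdev
        for ((i = 0; i < 10; i++)); do    # @16: ten hot-plug cycles
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

Because the eight workers run as concurrent background subshells sharing one console, their xtrace lines interleave arbitrarily, which is why @16/@17/@18 entries for different NSIDs alternate throughout the rest of this stage.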
00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:33.038 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
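Every rpc.py invocation in this trace is a thin client for SPDK's JSON-RPC server, which listens on a Unix domain socket (/var/tmp/spdk.sock by default). As a purely illustrative equivalent, and with the caveat that the exact params layout is an assumption here rather than something shown in the log, the add at line 17 corresponds to a raw request along these lines:

    # Hypothetical raw form of: rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    # (requires a netcat build with Unix-socket support, e.g. OpenBSD nc)
    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
      "params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"null0","nsid":1}}}' \
        | nc -U /var/tmp/spdk.sock

The useful property under stress is that the target handles RPC requests one at a time, so the eight racing workers can only contend over which NSIDs exist at a given instant; they cannot corrupt the subsystem's namespace list itself.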
00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1846108 1846109 1846112 1846113 1846115 1846117 1846119 1846120 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.039 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.300 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.562 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.824 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.824 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.824 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.824 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.824 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.825 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.087 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.349 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.610 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.870 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.870 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.870 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.129 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.389 
10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.389 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.650 10:25:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.650 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.911 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.912 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.173 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.434 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.435 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.435 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.697 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.697 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.697 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.697 10:25:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.697 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.697 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.698 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.698 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.698 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT 
SIGTERM EXIT
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:36.959 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1838869 ']'
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1838869
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1838869 ']'
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1838869
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:36.959 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838869
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838869'
00:07:37.220 killing process with pid 1838869
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1838869
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1838869
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
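
The nvmfcleanup trace just above (nvmf/common.sh@124-127) disables errexit and retries unloading the kernel NVMe-oF modules, whose rmmod output is interleaved in the log. A minimal sketch of that shape; the loop bound and module names come from the trace, while the break-on-success is an assumption, since the loop body is not fully shown here:

    # Unload the kernel NVMe-oF modules, retrying because they can stay
    # busy for a moment after the target process exits.
    set +e                                # nvmf/common.sh@124: tolerate failures
    for i in {1..20}; do                  # @125
        modprobe -v -r nvme-tcp && break  # @126: emits the rmmod lines above
    done
    modprobe -v -r nvme-fabrics           # @127: pulls out nvme_fabrics/nvme_keyring
    set -e                                # @128: restore errexit
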
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:37.220 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:39.769
00:07:39.769 real 0m48.932s
00:07:39.769 user 3m19.613s
00:07:39.769 sys 0m17.680s
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:39.769 ************************************
00:07:39.769 END TEST nvmf_ns_hotplug_stress
00:07:39.769 ************************************
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:39.769 ************************************
00:07:39.769 START TEST nvmf_delete_subsystem
00:07:39.769 ************************************
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
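
The nvmf_ns_hotplug_stress trace that fills the earlier part of this section reduces to a small add/remove loop. A minimal sketch of the loop shape implied by the ns_hotplug_stress.sh@16-18 trace lines; the rpc.py calls are taken verbatim from the log, while the function name, variable names, and the backgrounding of one worker per namespace are assumptions (the interleaved @16-18 lines suggest parallel loops, but the script body is not shown here):

    #!/usr/bin/env bash
    # Repeatedly attach and detach one namespace on the target subsystem.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    hotplug_ns() {
        local nsid=$1 bdev=$2 i
        for (( i = 0; i < 10; ++i )); do                          # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # sh@18
        done
    }

    # One backgrounded worker per namespace would explain the interleaved
    # (( ++i )) lines seen in the trace.
    for nsid in {1..8}; do
        hotplug_ns "$nsid" "null$((nsid - 1))" &
    done
    wait

The point of the churn is to race namespace attach/detach against live I/O, so the interleaving across namespaces is the feature, not log corruption.
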
00:07:39.769 * Looking for test storage...
00:07:39.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:39.769 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:39.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.770 --rc genhtml_branch_coverage=1
00:07:39.770 --rc genhtml_function_coverage=1
00:07:39.770 --rc genhtml_legend=1
00:07:39.770 --rc geninfo_all_blocks=1
00:07:39.770 --rc geninfo_unexecuted_blocks=1
00:07:39.770
00:07:39.770 '
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:39.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.770 --rc genhtml_branch_coverage=1
00:07:39.770 --rc genhtml_function_coverage=1
00:07:39.770 --rc genhtml_legend=1
00:07:39.770 --rc geninfo_all_blocks=1
00:07:39.770 --rc geninfo_unexecuted_blocks=1
00:07:39.770
00:07:39.770 '
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:39.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.770 --rc genhtml_branch_coverage=1
00:07:39.770 --rc genhtml_function_coverage=1
00:07:39.770 --rc genhtml_legend=1
00:07:39.770 --rc geninfo_all_blocks=1
00:07:39.770 --rc geninfo_unexecuted_blocks=1
00:07:39.770
00:07:39.770 '
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:39.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.770 --rc genhtml_branch_coverage=1
00:07:39.770 --rc genhtml_function_coverage=1
00:07:39.770 --rc genhtml_legend=1
00:07:39.770 --rc geninfo_all_blocks=1
00:07:39.770 --rc geninfo_unexecuted_blocks=1
00:07:39.770
00:07:39.770 '
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
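
The cmp_versions trace above (scripts/common.sh@333-368) is the harness checking whether the installed lcov is older than 2. A condensed sketch of the logic the trace walks through; the split on IFS=.-: and the component loop come straight from the trace, while the handling of unequal component counts is an assumption, since only the traced branch is visible:

    # Return 0 (true) when version $1 sorts strictly before version $2,
    # comparing numeric components split on '.', '-' and ':'.
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"    # scripts/common.sh@336
        IFS=.-: read -ra ver2 <<< "$2"    # scripts/common.sh@337
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; ++v )); do                 # @364
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # @367
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # @368
        done
        return 1
    }

    lt 1.15 2 && echo "lcov predates 2"   # matches the trace: 1 < 2, return 0

In the run above the comparison succeeds on the first component (1 < 2), which is why the trace returns 0 and the --rc lcov options get exported.
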
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:39.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:07:39.770 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
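
The "integer expression expected" line above is a real scripting error surfaced by the trace: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test(1) requires integer operands on both sides of -eq, so an empty string is rejected. A minimal reproduction and the usual guard; "$maybe_empty" is a hypothetical stand-in for whatever value common.sh tests there, since only the failing expansion is visible in the log:

    # Reproduces the error: [ "" -eq 1 ] -> "[: : integer expression expected"
    maybe_empty=""
    [ "$maybe_empty" -eq 1 ]              # fails exactly like common.sh line 33

    # Defaulting the possibly-empty value sidesteps it:
    if [ "${maybe_empty:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi

Because the '[' builtin returns nonzero either way, the script keeps going; the message is noise in the log rather than a test failure.
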
local -ga x722 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:47.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.916 
10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:47.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:47.916 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:47.916 Found net devices under 0000:4b:00.1: cvl_0_1 
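Note: the scan above is how the harness binds this run to real hardware. common.sh walks the allow-listed PCI IDs (two Intel E810 functions, 0x8086:0x159b, at 0000:4b:00.0 and 0000:4b:00.1), resolves each function to its kernel interface through the sysfs net/ directory, and collects the names (cvl_0_0, cvl_0_1) into net_devs. A minimal sketch of that lookup, mirroring the expansions traced above (an illustration of the pattern, not the harness source):

# Resolve each allow-listed PCI function to its net device via sysfs,
# as the pci_net_devs expansions in the trace above do.
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done

The lines that follow split the pair across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings prove the path in both directions before any NVMe traffic starts.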
00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.916 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.916 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:07:47.917 00:07:47.917 --- 10.0.0.2 ping statistics --- 00:07:47.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.917 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:07:47.917 00:07:47.917 --- 10.0.0.1 ping statistics --- 00:07:47.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.917 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1851293 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1851293 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1851293 ']' 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.917 10:25:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.917 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.917 [2024-11-20 10:25:19.385769] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:07:47.917 [2024-11-20 10:25:19.385837] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.917 [2024-11-20 10:25:19.485135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.917 [2024-11-20 10:25:19.536354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.917 [2024-11-20 10:25:19.536400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.917 [2024-11-20 10:25:19.536409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.917 [2024-11-20 10:25:19.536416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.917 [2024-11-20 10:25:19.536423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.917 [2024-11-20 10:25:19.538049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.917 [2024-11-20 10:25:19.538054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.917 [2024-11-20 10:25:20.249258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:47.917 10:25:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.917 [2024-11-20 10:25:20.273550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.917 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.179 NULL1 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.179 Delay0 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.179 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.180 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.180 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1851638 00:07:48.180 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:48.180 10:25:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:48.180 [2024-11-20 10:25:20.400612] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
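Note: everything the target needs has now been provisioned over JSON-RPC inside the cvl_0_0_ns_spdk namespace: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (max 10 namespaces), a listener on 10.0.0.2:4420, a 1000 MB / 512 B-block null bdev wrapped in a delay bdev that holds every I/O for 1000000 us (roughly a second), and the namespace mapping. Condensed into the stock rpc.py client, the traced sequence is roughly as follows; the harness actually issues these through its rpc_cmd wrapper, and the rpc.py path and default socket are assumed here:

# Sketch of the provisioning sequence traced above, using the standard
# SPDK rpc.py client; flags are copied from the rpc_cmd lines in the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the point of the test: with spdk_nvme_perf holding queue depth 128 (-q 128) against ~1 s latencies, the nvmf_delete_subsystem that follows is guaranteed to land while I/O is still outstanding.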
00:07:50.095 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.095 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.095 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 
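Note: the wall of completions starting here, and continuing for several hundred lines below, is the expected outcome rather than a failure. nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 (issued at the top of this block) tears the subsystem down under perf's 128 outstanding commands; every queued command comes back aborted, and further submissions fail with -6, which matches -ENXIO once the qpair is gone. Reading the status fields: sct=0 is the generic command status type, and sc=8 in that table is, per the NVMe base specification, Command Aborted due to SQ Deletion. A throwaway decoder for the pairs seen here (a hypothetical helper, not part of the suite; mapping taken from the spec's generic status table):

# Hypothetical helper: decode the (sct, sc) pairs printed by
# spdk_nvme_perf above, per the NVMe generic command status table.
decode_status() {
  local sct=$1 sc=$2
  case "$sct:$sc" in
    0:0) echo "Successful Completion" ;;
    0:7) echo "Command Abort Requested" ;;
    0:8) echo "Command Aborted due to SQ Deletion" ;;   # what this log shows
    *)   echo "sct=$sct sc=$sc: see the NVMe base spec status tables" ;;
  esac
}
decode_status 0 8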
00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write 
completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 [2024-11-20 10:25:22.527976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce52c0 is same with the state(6) to be set 00:07:50.357 starting I/O failed: -6 00:07:50.357 starting I/O failed: -6 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Write completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 starting I/O failed: -6 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 [2024-11-20 10:25:22.531126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcaf8000c40 is same with the state(6) to be set 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.357 Read completed with error (sct=0, sc=8) 
00:07:50.357 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Write completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 Read completed with error (sct=0, sc=8) 00:07:50.358 [2024-11-20 10:25:22.531802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcaf800d490 is same with the state(6) to be set 00:07:51.301 [2024-11-20 10:25:23.500896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce69a0 is same with the state(6) to be set 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 
Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 [2024-11-20 10:25:23.532372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce54a0 is same with the state(6) to be set 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 [2024-11-20 10:25:23.533488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5860 is same with the state(6) to be set 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed 
with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 [2024-11-20 10:25:23.533723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcaf800d020 is same with the state(6) to be set 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.301 Write completed with error (sct=0, sc=8) 00:07:51.301 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Write completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Write completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Write completed with error (sct=0, sc=8) 00:07:51.302 Write completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Write completed with error (sct=0, sc=8) 00:07:51.302 Write completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 Read completed with error (sct=0, sc=8) 00:07:51.302 [2024-11-20 10:25:23.533828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcaf800d7c0 is same with the state(6) to be set 00:07:51.302 Initializing NVMe Controllers 00:07:51.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:51.302 Controller IO queue size 128, less than required. 00:07:51.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:51.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:51.302 Initialization complete. Launching workers. 
00:07:51.302 ======================================================== 00:07:51.302 Latency(us) 00:07:51.302 Device Information : IOPS MiB/s Average min max 00:07:51.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.10 0.09 898561.70 446.18 1009037.99 00:07:51.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.80 0.07 1012162.75 424.92 2002658.03 00:07:51.302 ======================================================== 00:07:51.302 Total : 336.90 0.16 948398.50 424.92 2002658.03 00:07:51.302 00:07:51.302 [2024-11-20 10:25:23.534418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce69a0 (9): Bad file descriptor 00:07:51.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:51.302 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.302 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:51.302 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1851638 00:07:51.302 10:25:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1851638 00:07:51.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1851638) - No such process 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1851638 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1851638 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1851638 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.874 10:25:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.874 [2024-11-20 10:25:24.064076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1852324 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:51.874 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.874 [2024-11-20 10:25:24.152236] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
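Note: the second half of the test re-creates the subsystem and runs a shorter perf job (-t 3) against it, then deletes the subsystem mid-run again. The alternating kill -0 1852324 / sleep 0.5 lines that follow are a watchdog waiting for perf to exit on its own; in sketch form (structure inferred from the traced delete_subsystem.sh line numbers 56-60, not copied from the script):

# Watchdog sketch for the kill -0 / sleep 0.5 trace below: poll the perf
# pid until it exits, with a bounded number of half-second waits.
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
  sleep 0.5
  if ((delay++ > 20)); then      # roughly a 10 s budget
    echo "perf still running after subsystem delete" >&2
    exit 1
  fi
done

The eventual "kill: (1852324) - No such process" below is this loop's success path: perf terminated once its controller disappeared.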
00:07:52.445 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:52.445 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:52.445 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.016 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:53.016 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:53.016 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.276 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:53.276 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:53.276 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:53.847 10:25:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:53.847 10:25:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:53.847 10:25:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.418 10:25:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.418 10:25:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:54.418 10:25:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.990 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.990 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:54.990 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.990 Initializing NVMe Controllers 00:07:54.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.990 Controller IO queue size 128, less than required. 00:07:54.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:54.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:54.990 Initialization complete. Launching workers. 
00:07:54.990 ======================================================== 00:07:54.990 Latency(us) 00:07:54.990 Device Information : IOPS MiB/s Average min max 00:07:54.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001896.15 1000285.13 1006262.18 00:07:54.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002766.70 1000266.66 1008683.66 00:07:54.990 ======================================================== 00:07:54.990 Total : 256.00 0.12 1002331.43 1000266.66 1008683.66 00:07:54.990 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852324 00:07:55.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1852324) - No such process 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1852324 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.253 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.514 rmmod nvme_tcp 00:07:55.514 rmmod nvme_fabrics 00:07:55.514 rmmod nvme_keyring 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1851293 ']' 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1851293 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1851293 ']' 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1851293 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851293 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851293' 00:07:55.514 killing process with pid 1851293 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1851293 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1851293 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:55.514 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.515 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.060 00:07:58.060 real 0m18.281s 00:07:58.060 user 0m30.797s 00:07:58.060 sys 0m6.758s 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.060 ************************************ 00:07:58.060 END TEST nvmf_delete_subsystem 00:07:58.060 ************************************ 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.060 10:25:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.060 ************************************ 00:07:58.060 START TEST nvmf_host_management 00:07:58.060 ************************************ 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:58.060 * Looking for test storage... 
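Note: the nvmftestfini teardown just above shows the tag-and-sweep firewall pattern the harness relies on. Setup inserted rules through ipts, which appends an SPDK_NVMF-prefixed comment to every rule (visible in the iptables -I INPUT ... -m comment line early in this test), so cleanup never has to track rules individually; iptr simply filters the tag out of a full save/restore cycle. Reconstructed from the traced expansions (same shape as what common.sh logs, written here as standalone functions):

# Tag-and-sweep firewall helpers, reconstructed from the expansions in
# the trace: ipts tags each inserted rule, iptr removes all tagged rules.
ipts() {
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}

This is why the teardown could restore the firewall without knowing which rules this particular run had added.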
00:07:58.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.060 --rc genhtml_branch_coverage=1 00:07:58.060 --rc genhtml_function_coverage=1 00:07:58.060 --rc genhtml_legend=1 00:07:58.060 --rc geninfo_all_blocks=1 00:07:58.060 --rc geninfo_unexecuted_blocks=1 00:07:58.060 00:07:58.060 ' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.060 --rc genhtml_branch_coverage=1 00:07:58.060 --rc genhtml_function_coverage=1 00:07:58.060 --rc genhtml_legend=1 00:07:58.060 --rc geninfo_all_blocks=1 00:07:58.060 --rc geninfo_unexecuted_blocks=1 00:07:58.060 00:07:58.060 ' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.060 --rc genhtml_branch_coverage=1 00:07:58.060 --rc genhtml_function_coverage=1 00:07:58.060 --rc genhtml_legend=1 00:07:58.060 --rc geninfo_all_blocks=1 00:07:58.060 --rc geninfo_unexecuted_blocks=1 00:07:58.060 00:07:58.060 ' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.060 --rc genhtml_branch_coverage=1 00:07:58.060 --rc genhtml_function_coverage=1 00:07:58.060 --rc genhtml_legend=1 00:07:58.060 --rc geninfo_all_blocks=1 00:07:58.060 --rc geninfo_unexecuted_blocks=1 00:07:58.060 00:07:58.060 ' 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.060 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go tool directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the previous PATH, duplicates included]
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the previous PATH, duplicates included]
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[the same duplicate-laden tail as above]
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:07:58.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.061 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:06.209 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:06.209 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:06.209 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.209 10:25:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:06.209 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.209 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:06.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:06.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms
00:08:06.210
00:08:06.210 --- 10.0.0.2 ping statistics ---
00:08:06.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:06.210 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:06.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:06.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms
00:08:06.210
00:08:06.210 --- 10.0.0.1 ping statistics ---
00:08:06.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:06.210 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1857343
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1857343
00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:08:06.210 10:25:37
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1857343 ']' 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.210 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.210 [2024-11-20 10:25:37.853592] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:08:06.210 [2024-11-20 10:25:37.853659] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.210 [2024-11-20 10:25:37.958472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.210 [2024-11-20 10:25:38.013272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.210 [2024-11-20 10:25:38.013319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.210 [2024-11-20 10:25:38.013328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.210 [2024-11-20 10:25:38.013335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.210 [2024-11-20 10:25:38.013342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
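The block above shows nvmfappstart bringing the target up inside the namespace that the earlier nvmf_tcp_init trace created: one e810 port (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk as the target side, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side, and nvmf_tgt then starts under "ip netns exec" on cores 1-4. A minimal bash sketch of the same flow, using only commands visible in the trace plus an assumed polling loop standing in for waitforlisten (the real helper in autotest_common.sh adds bounded retries and liveness checks); paths assume the spdk checkout as the working directory:

    #!/usr/bin/env bash
    set -e
    # Split the two NIC ports between namespaces (from the nvmf_tcp_init trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic from the initiator port in (tagged so iptr can remove it later).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Start the target in the namespace: shm id 0, all tracepoint groups, core mask 0x1E.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Assumed readiness loop: framework_wait_init returns once subsystem init completes.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done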
00:08:06.210 [2024-11-20 10:25:38.015381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.210 [2024-11-20 10:25:38.015545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.210 [2024-11-20 10:25:38.015705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.210 [2024-11-20 10:25:38.015705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 [2024-11-20 10:25:38.723284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 Malloc0 00:08:06.472 [2024-11-20 10:25:38.800892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.472 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1857451 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1857451 /var/tmp/bdevperf.sock 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1857451 ']' 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:06.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:06.734 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:06.735 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:06.735 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:06.735 { 00:08:06.735 "params": { 00:08:06.735 "name": "Nvme$subsystem", 00:08:06.735 "trtype": "$TEST_TRANSPORT", 00:08:06.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.735 "adrfam": "ipv4", 00:08:06.735 "trsvcid": "$NVMF_PORT", 00:08:06.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.735 "hdgst": ${hdgst:-false}, 00:08:06.735 "ddgst": ${ddgst:-false} 00:08:06.735 }, 00:08:06.735 "method": "bdev_nvme_attach_controller" 00:08:06.735 } 00:08:06.735 EOF 00:08:06.735 )") 00:08:06.735 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:06.735 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:06.735 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:06.735 10:25:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:06.735 "params": { 00:08:06.735 "name": "Nvme0", 00:08:06.735 "trtype": "tcp", 00:08:06.735 "traddr": "10.0.0.2", 00:08:06.735 "adrfam": "ipv4", 00:08:06.735 "trsvcid": "4420", 00:08:06.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:06.735 "hdgst": false, 00:08:06.735 "ddgst": false 00:08:06.735 }, 00:08:06.735 "method": "bdev_nvme_attach_controller" 00:08:06.735 }' 00:08:06.735 [2024-11-20 10:25:38.921191] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
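Above, the gen_nvmf_target_json helper from the sourced nvmf/common.sh emits a JSON config containing a single bdev_nvme_attach_controller call (Nvme0 over tcp to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0, host nqn.2016-06.io.spdk:host0), which bdevperf receives on /dev/fd/63 via process substitution. The host-management check that follows waits for I/O to flow and then revokes the host's access mid-run. A condensed bash sketch of both steps under the same names; the one-second pacing and the direct rpc.py calls are assumptions (the trace uses the rpc_cmd wrapper), and the working directory is assumed to be the spdk checkout with nvmf/common.sh already sourced:

    #!/usr/bin/env bash
    # Launch bdevperf against the target with the generated controller config:
    # 64 outstanding I/Os, 64 KiB I/O size, verify workload, 10 seconds.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 0) &
    perfpid=$!

    # waitforio: poll the bdev's iostat until at least 100 reads have completed.
    for (( i = 10; i != 0; i-- )); do
        reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 1                         # assumed pacing; the helper's loop differs
    done

    # Revoke and restore the host's access while I/O is in flight; the queued
    # commands are aborted with "SQ DELETION", as the trace further below shows.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0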
00:08:06.735 [2024-11-20 10:25:38.921275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857451 ] 00:08:06.735 [2024-11-20 10:25:39.016019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.735 [2024-11-20 10:25:39.070066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.996 Running I/O for 10 seconds... 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=604 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 604 -ge 100 ']' 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:07.571 10:25:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:07.571 [2024-11-20 10:25:39.816779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf11130 is same with the state(6) to be set
00:08:07.571 [the same tcp.c:1773 message repeats several more times for tqpair=0xf11130 as the target drains the connection]
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:07.571 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:07.571 [2024-11-20 10:25:39.830973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:08:07.571 [2024-11-20 10:25:39.831027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:07.571 [several dozen near-identical nvme_qpair.c trace lines condensed: the remaining admin ASYNC EVENT REQUESTs (qid 0, cid 1-3) are aborted the same way, nvme_tcp.c:326 reports the recv state of tqpair=0x22c9000, and every outstanding READ/WRITE command on sqid 1 (cid 0-47, LBAs 90112-96128, len 128) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) as the initiator's queue pair is torn down after the host is removed from the subsystem]
[2024-11-20 10:25:39.832045] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.832305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.573 [2024-11-20 10:25:39.832313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:07.573 [2024-11-20 10:25:39.833593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:07.573 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.573 10:25:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:07.573 task offset: 91264 on job bdev=Nvme0n1 fails 00:08:07.573 00:08:07.573 Latency(us) 00:08:07.573 [2024-11-20T09:25:39.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.573 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:07.573 Job: Nvme0n1 ended in about 0.47 seconds with error 00:08:07.573 Verification LBA range: start 0x0 length 0x400 00:08:07.573 Nvme0n1 : 0.47 1491.85 93.24 135.62 0.00 38225.32 1856.85 36263.25 00:08:07.573 [2024-11-20T09:25:39.949Z] =================================================================================================================== 00:08:07.573 [2024-11-20T09:25:39.949Z] Total : 1491.85 93.24 135.62 0.00 38225.32 1856.85 36263.25 00:08:07.573 [2024-11-20 10:25:39.835844] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.573 [2024-11-20 10:25:39.835879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c9000 (9): Bad file descriptor 00:08:07.834 [2024-11-20 10:25:39.980405] 
bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1857451 00:08:08.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1857451) - No such process 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:08.777 { 00:08:08.777 "params": { 00:08:08.777 "name": "Nvme$subsystem", 00:08:08.777 "trtype": "$TEST_TRANSPORT", 00:08:08.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.777 "adrfam": "ipv4", 00:08:08.777 "trsvcid": "$NVMF_PORT", 00:08:08.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.777 "hdgst": ${hdgst:-false}, 00:08:08.777 "ddgst": ${ddgst:-false} 00:08:08.777 }, 00:08:08.777 "method": "bdev_nvme_attach_controller" 00:08:08.777 } 00:08:08.777 EOF 00:08:08.777 )") 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:08.777 10:25:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:08.777 "params": { 00:08:08.777 "name": "Nvme0", 00:08:08.777 "trtype": "tcp", 00:08:08.777 "traddr": "10.0.0.2", 00:08:08.777 "adrfam": "ipv4", 00:08:08.777 "trsvcid": "4420", 00:08:08.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.777 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:08.777 "hdgst": false, 00:08:08.777 "ddgst": false 00:08:08.777 }, 00:08:08.777 "method": "bdev_nvme_attach_controller" 00:08:08.777 }' 00:08:08.777 [2024-11-20 10:25:40.893274] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
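A note on the step above: gen_nvmf_target_json expands its heredoc into a bdev_nvme_attach_controller JSON fragment and hands it to bdevperf through /dev/fd/62. A minimal standalone sketch of the same invocation, assuming a target already listening at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0; the outer "subsystems" wrapper shape is an assumption here, not copied from nvmf/common.sh:

# Sketch: inline JSON bdev config fed to bdevperf on fd 62 ("subsystems" wrapper assumed).
./build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 62<<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON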
00:08:08.777 [2024-11-20 10:25:40.893328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857923 ] 00:08:08.777 [2024-11-20 10:25:40.982074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.777 [2024-11-20 10:25:41.016989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.038 Running I/O for 1 seconds... 00:08:10.009 1611.00 IOPS, 100.69 MiB/s 00:08:10.009 Latency(us) 00:08:10.009 [2024-11-20T09:25:42.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.009 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:10.009 Verification LBA range: start 0x0 length 0x400 00:08:10.009 Nvme0n1 : 1.02 1642.49 102.66 0.00 0.00 38198.47 4259.84 32112.64 00:08:10.009 [2024-11-20T09:25:42.385Z] =================================================================================================================== 00:08:10.009 [2024-11-20T09:25:42.385Z] Total : 1642.49 102.66 0.00 0.00 38198.47 4259.84 32112.64 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.270 rmmod nvme_tcp 00:08:10.270 rmmod nvme_fabrics 00:08:10.270 rmmod nvme_keyring 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1857343 ']' 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1857343 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1857343 ']' 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1857343 00:08:10.270 10:25:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1857343 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1857343' 00:08:10.270 killing process with pid 1857343 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1857343 00:08:10.270 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1857343 00:08:10.531 [2024-11-20 10:25:42.695078] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.531 10:25:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.446 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.446 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:12.446 00:08:12.446 real 0m14.778s 00:08:12.446 user 0m23.758s 00:08:12.446 sys 0m6.813s 00:08:12.446 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.446 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.446 ************************************ 00:08:12.446 END TEST nvmf_host_management 00:08:12.446 ************************************ 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
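Before the lvol test output begins: the nvmftestfini sequence that closed nvmf_host_management above unloads the initiator modules, strips the SPDK_NVMF iptables rule, and tears down the target namespace. A condensed sketch of the equivalent manual cleanup (interface cvl_0_1 and namespace cvl_0_0_ns_spdk are this rig's names; the error handling is illustrative):

# Sketch: manual NVMe-oF TCP test teardown, mirroring nvmftestfini.
sync
modprobe -v -r nvme-tcp                                 # initiator transport
modprobe -v -r nvme-fabrics                             # fabrics core, once unused
rmmod nvme_keyring 2>/dev/null || true                  # may already be unloaded
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the test ACCEPT rule
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # remove the target namespace
ip -4 addr flush cvl_0_1                                # clear the initiator address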
00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.707 ************************************ 00:08:12.707 START TEST nvmf_lvol 00:08:12.707 ************************************ 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:12.707 * Looking for test storage... 00:08:12.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.707 10:25:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.707 --rc genhtml_branch_coverage=1 00:08:12.707 --rc genhtml_function_coverage=1 00:08:12.707 --rc genhtml_legend=1 00:08:12.707 --rc geninfo_all_blocks=1 00:08:12.707 --rc geninfo_unexecuted_blocks=1 00:08:12.707 00:08:12.707 ' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.707 --rc genhtml_branch_coverage=1 00:08:12.707 --rc genhtml_function_coverage=1 00:08:12.707 --rc genhtml_legend=1 00:08:12.707 --rc geninfo_all_blocks=1 00:08:12.707 --rc geninfo_unexecuted_blocks=1 00:08:12.707 00:08:12.707 ' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.707 --rc genhtml_branch_coverage=1 00:08:12.707 --rc genhtml_function_coverage=1 00:08:12.707 --rc genhtml_legend=1 00:08:12.707 --rc geninfo_all_blocks=1 00:08:12.707 --rc geninfo_unexecuted_blocks=1 00:08:12.707 00:08:12.707 ' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.707 --rc genhtml_branch_coverage=1 00:08:12.707 --rc genhtml_function_coverage=1 00:08:12.707 --rc genhtml_legend=1 00:08:12.707 --rc geninfo_all_blocks=1 00:08:12.707 --rc geninfo_unexecuted_blocks=1 00:08:12.707 00:08:12.707 ' 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.707 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
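The cmp_versions trace above ("lt 1.15 2") is how scripts/common.sh decides that the installed lcov predates 2.x: both versions are split on dots and compared field by field. A minimal standalone rendering of that logic, condensed from the xtrace rather than copied from the source:

# Sketch: field-wise dotted-version compare; lt returns 0 when $1 < $2.
lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field wins
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger field loses
  done
  return 1                                            # equal is not less-than
}
lt 1.15 2 && echo "lcov is older than 2.x"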
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain prefix concatenation elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain prefix concatenation elided ...]:/var/lib/snapd/snap/bin
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain prefix concatenation elided ...]:/var/lib/snapd/snap/bin
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:12.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:12.968 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- #
LVOL_BDEV_INIT_SIZE=20 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.969 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.112 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:21.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:21.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.113 10:25:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:21.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:21.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:08:21.113 00:08:21.113 --- 10.0.0.2 ping statistics --- 00:08:21.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.113 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:08:21.113 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:08:21.113 00:08:21.113 --- 10.0.0.1 ping statistics --- 00:08:21.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.114 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1862446 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1862446 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1862446 ']' 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.114 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.114 [2024-11-20 10:25:52.649438] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
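For readers reproducing the phy setup: the nvmf_tcp_init block above splits the two e810 ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a dedicated target namespace (10.0.0.2 on cvl_0_0), then verifies both directions with ping. A condensed sketch of that plumbing, taken from the traced commands (interface and namespace names are this rig's):

# Sketch: two-port namespace split used by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF                        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator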
00:08:21.114 [2024-11-20 10:25:52.649501] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.114 [2024-11-20 10:25:52.751909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.114 [2024-11-20 10:25:52.803971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.114 [2024-11-20 10:25:52.804023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.114 [2024-11-20 10:25:52.804031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.114 [2024-11-20 10:25:52.804038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.114 [2024-11-20 10:25:52.804044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.114 [2024-11-20 10:25:52.805930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.114 [2024-11-20 10:25:52.806087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.114 [2024-11-20 10:25:52.806088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.114 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.114 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:21.114 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.114 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.114 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.374 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.374 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.374 [2024-11-20 10:25:53.693858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.374 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.634 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:21.634 10:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.897 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:21.897 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:22.159 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:22.422 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a0e34412-f70c-4157-b446-83950301bf70 00:08:22.422 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0e34412-f70c-4157-b446-83950301bf70 lvol 20 00:08:22.422 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1ae7781e-4c7d-46e4-a381-b893cb847719 00:08:22.422 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.685 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1ae7781e-4c7d-46e4-a381-b893cb847719 00:08:22.947 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.947 [2024-11-20 10:25:55.311311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.207 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.208 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1863143 00:08:23.208 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:23.208 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:24.593 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1ae7781e-4c7d-46e4-a381-b893cb847719 MY_SNAPSHOT 00:08:24.593 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7a4ff056-11e1-4cb7-bb73-0e7ed5342883 00:08:24.593 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1ae7781e-4c7d-46e4-a381-b893cb847719 30 00:08:24.593 10:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7a4ff056-11e1-4cb7-bb73-0e7ed5342883 MY_CLONE 00:08:24.854 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b0662611-fc4c-4b85-ac85-d4fd9d249cd6 00:08:24.854 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b0662611-fc4c-4b85-ac85-d4fd9d249cd6 00:08:25.426 10:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1863143 00:08:35.425 Initializing NVMe Controllers 00:08:35.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:35.425 Controller IO queue size 128, less than required. 00:08:35.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
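The interesting part of nvmf_lvol happens while spdk_nvme_perf drives ten seconds of 4K randwrite against the exported volume: the script snapshots the live lvol, resizes it, clones the snapshot, and inflates the clone, exercising the lvol metadata paths under load. Condensed to its RPC calls, with illustrative shell variables and the UUIDs captured in this run, the sequence reduces to roughly:

# Sketch: lvol operations performed while I/O is in flight (UUIDs from this run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvol=1ae7781e-4c7d-46e4-a381-b893cb847719

snap=$("$RPC" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # prints the snapshot UUID
"$RPC" bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
clone=$("$RPC" bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
"$RPC" bdev_lvol_inflate "$clone"                       # detach the clone from its parent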
00:08:35.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:35.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:35.425 Initialization complete. Launching workers. 00:08:35.425 ======================================================== 00:08:35.425 Latency(us) 00:08:35.425 Device Information : IOPS MiB/s Average min max 00:08:35.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16188.00 63.23 7909.00 1504.95 59111.39 00:08:35.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17207.90 67.22 7439.98 425.68 57410.49 00:08:35.426 ======================================================== 00:08:35.426 Total : 33395.90 130.45 7667.33 425.68 59111.39 00:08:35.426 00:08:35.426 10:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1ae7781e-4c7d-46e4-a381-b893cb847719 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0e34412-f70c-4157-b446-83950301bf70 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.426 rmmod nvme_tcp 00:08:35.426 rmmod nvme_fabrics 00:08:35.426 rmmod nvme_keyring 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1862446 ']' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1862446 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1862446 ']' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1862446 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862446 00:08:35.426 10:26:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862446' 00:08:35.426 killing process with pid 1862446 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1862446 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1862446 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.426 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.966 00:08:36.966 real 0m24.008s 00:08:36.966 user 1m5.123s 00:08:36.966 sys 0m8.679s 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:36.966 ************************************ 00:08:36.966 END TEST nvmf_lvol 00:08:36.966 ************************************ 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.966 ************************************ 00:08:36.966 START TEST nvmf_lvs_grow 00:08:36.966 ************************************ 00:08:36.966 10:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:36.966 * Looking for test storage... 
00:08:36.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.966 --rc genhtml_branch_coverage=1 00:08:36.966 --rc genhtml_function_coverage=1 00:08:36.966 --rc genhtml_legend=1 00:08:36.966 --rc geninfo_all_blocks=1 00:08:36.966 --rc geninfo_unexecuted_blocks=1 00:08:36.966 00:08:36.966 ' 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.966 --rc genhtml_branch_coverage=1 00:08:36.966 --rc genhtml_function_coverage=1 00:08:36.966 --rc genhtml_legend=1 00:08:36.966 --rc geninfo_all_blocks=1 00:08:36.966 --rc geninfo_unexecuted_blocks=1 00:08:36.966 00:08:36.966 ' 00:08:36.966 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.967 --rc genhtml_branch_coverage=1 00:08:36.967 --rc genhtml_function_coverage=1 00:08:36.967 --rc genhtml_legend=1 00:08:36.967 --rc geninfo_all_blocks=1 00:08:36.967 --rc geninfo_unexecuted_blocks=1 00:08:36.967 00:08:36.967 ' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.967 --rc genhtml_branch_coverage=1 00:08:36.967 --rc genhtml_function_coverage=1 00:08:36.967 --rc genhtml_legend=1 00:08:36.967 --rc geninfo_all_blocks=1 00:08:36.967 --rc geninfo_unexecuted_blocks=1 00:08:36.967 00:08:36.967 ' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:36.967 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:36.967 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:45.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:45.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.113 10:26:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.113 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:45.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:45.114 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:08:45.114 00:08:45.114 --- 10.0.0.2 ping statistics --- 00:08:45.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.114 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:08:45.114 00:08:45.114 --- 10.0.0.1 ping statistics --- 00:08:45.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.114 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1869521 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1869521 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1869521 ']' 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.114 10:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:45.114 [2024-11-20 10:26:16.726420] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
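The network setup nvmftestinit performed above follows a fixed split: one ice port (cvl_0_0) moves into a fresh namespace to act as the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and one ping in each direction proves reachability. The plumbing reduces to roughly this, assuming the same interface names as this run:

# Sketch of the namespace plumbing behind nvmftestinit (interface names from this run).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> root ns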
00:08:45.114 [2024-11-20 10:26:16.726491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.114 [2024-11-20 10:26:16.826979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.114 [2024-11-20 10:26:16.879126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.114 [2024-11-20 10:26:16.879187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.114 [2024-11-20 10:26:16.879196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.114 [2024-11-20 10:26:16.879202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.114 [2024-11-20 10:26:16.879209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.114 [2024-11-20 10:26:16.879959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.376 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.376 [2024-11-20 10:26:17.746106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.638 ************************************ 00:08:45.638 START TEST lvs_grow_clean 00:08:45.638 ************************************ 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:45.638 10:26:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.638 10:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.899 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:45.899 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:45.899 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:45.899 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:45.899 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:46.160 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:46.160 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:46.160 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 lvol 150 00:08:46.421 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=87cf3e62-f455-4e64-9159-6c0f7fe17115 00:08:46.421 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.421 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:46.421 [2024-11-20 10:26:18.767133] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:46.421 [2024-11-20 10:26:18.767226] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:46.421 true 00:08:46.421 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:46.421 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:46.683 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:46.683 10:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.944 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 87cf3e62-f455-4e64-9159-6c0f7fe17115 00:08:47.205 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:47.205 [2024-11-20 10:26:19.505470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.205 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1870229 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1870229 /var/tmp/bdevperf.sock 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1870229 ']' 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.467 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:47.467 [2024-11-20 10:26:19.745310] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
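The resize trick at the heart of lvs_grow_clean is already on the wire above: an lvstore is built on a 200 MiB file-backed AIO bdev (49 usable 4 MiB clusters), the file is truncated to 400 MiB, and bdev_aio_rescan picks up the new size (51200 to 102400 blocks), yet the cluster count stays at 49 until bdev_lvol_grow_lvstore runs later in the test. A condensed sketch of that sequence, assuming the same rpc.py and backing-file path:

# Sketch of the aio-bdev grow sequence exercised by lvs_grow_clean.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$AIO"
"$RPC" bdev_aio_create "$AIO" aio_bdev 4096                     # 4 KiB logical blocks
lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)            # prints the lvstore UUID
"$RPC" bdev_lvol_create -u "$lvs" lvol 150                      # 150 MiB volume

truncate -s 400M "$AIO"                                         # grow the backing file
"$RPC" bdev_aio_rescan aio_bdev                                 # bdev now reports 102400 blocks
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
"$RPC" bdev_lvol_grow_lvstore -u "$lvs"                         # claim the new space
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99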
00:08:47.467 [2024-11-20 10:26:19.745379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1870229 ] 00:08:47.467 [2024-11-20 10:26:19.837748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.729 [2024-11-20 10:26:19.890068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.302 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.302 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:48.302 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:48.564 Nvme0n1 00:08:48.564 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:48.826 [ 00:08:48.826 { 00:08:48.826 "name": "Nvme0n1", 00:08:48.826 "aliases": [ 00:08:48.826 "87cf3e62-f455-4e64-9159-6c0f7fe17115" 00:08:48.826 ], 00:08:48.826 "product_name": "NVMe disk", 00:08:48.826 "block_size": 4096, 00:08:48.826 "num_blocks": 38912, 00:08:48.826 "uuid": "87cf3e62-f455-4e64-9159-6c0f7fe17115", 00:08:48.826 "numa_id": 0, 00:08:48.826 "assigned_rate_limits": { 00:08:48.826 "rw_ios_per_sec": 0, 00:08:48.826 "rw_mbytes_per_sec": 0, 00:08:48.826 "r_mbytes_per_sec": 0, 00:08:48.826 "w_mbytes_per_sec": 0 00:08:48.826 }, 00:08:48.826 "claimed": false, 00:08:48.826 "zoned": false, 00:08:48.826 "supported_io_types": { 00:08:48.826 "read": true, 00:08:48.826 "write": true, 00:08:48.826 "unmap": true, 00:08:48.826 "flush": true, 00:08:48.826 "reset": true, 00:08:48.826 "nvme_admin": true, 00:08:48.826 "nvme_io": true, 00:08:48.826 "nvme_io_md": false, 00:08:48.826 "write_zeroes": true, 00:08:48.826 "zcopy": false, 00:08:48.826 "get_zone_info": false, 00:08:48.826 "zone_management": false, 00:08:48.826 "zone_append": false, 00:08:48.826 "compare": true, 00:08:48.826 "compare_and_write": true, 00:08:48.826 "abort": true, 00:08:48.826 "seek_hole": false, 00:08:48.826 "seek_data": false, 00:08:48.826 "copy": true, 00:08:48.826 "nvme_iov_md": false 00:08:48.826 }, 00:08:48.826 "memory_domains": [ 00:08:48.826 { 00:08:48.826 "dma_device_id": "system", 00:08:48.826 "dma_device_type": 1 00:08:48.826 } 00:08:48.826 ], 00:08:48.826 "driver_specific": { 00:08:48.826 "nvme": [ 00:08:48.826 { 00:08:48.826 "trid": { 00:08:48.826 "trtype": "TCP", 00:08:48.826 "adrfam": "IPv4", 00:08:48.826 "traddr": "10.0.0.2", 00:08:48.826 "trsvcid": "4420", 00:08:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:48.826 }, 00:08:48.826 "ctrlr_data": { 00:08:48.826 "cntlid": 1, 00:08:48.826 "vendor_id": "0x8086", 00:08:48.826 "model_number": "SPDK bdev Controller", 00:08:48.826 "serial_number": "SPDK0", 00:08:48.826 "firmware_revision": "25.01", 00:08:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.826 "oacs": { 00:08:48.826 "security": 0, 00:08:48.826 "format": 0, 00:08:48.826 "firmware": 0, 00:08:48.826 "ns_manage": 0 00:08:48.826 }, 00:08:48.826 "multi_ctrlr": true, 00:08:48.826 
"ana_reporting": false 00:08:48.826 }, 00:08:48.826 "vs": { 00:08:48.826 "nvme_version": "1.3" 00:08:48.826 }, 00:08:48.826 "ns_data": { 00:08:48.826 "id": 1, 00:08:48.826 "can_share": true 00:08:48.826 } 00:08:48.826 } 00:08:48.826 ], 00:08:48.826 "mp_policy": "active_passive" 00:08:48.826 } 00:08:48.826 } 00:08:48.826 ] 00:08:48.826 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1870564 00:08:48.826 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:48.826 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.826 Running I/O for 10 seconds... 00:08:50.211 Latency(us) 00:08:50.211 [2024-11-20T09:26:22.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.211 Nvme0n1 : 1.00 25131.00 98.17 0.00 0.00 0.00 0.00 0.00 00:08:50.211 [2024-11-20T09:26:22.587Z] =================================================================================================================== 00:08:50.211 [2024-11-20T09:26:22.587Z] Total : 25131.00 98.17 0.00 0.00 0.00 0.00 0.00 00:08:50.211 00:08:50.783 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:51.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.043 Nvme0n1 : 2.00 25293.00 98.80 0.00 0.00 0.00 0.00 0.00 00:08:51.043 [2024-11-20T09:26:23.419Z] =================================================================================================================== 00:08:51.044 [2024-11-20T09:26:23.420Z] Total : 25293.00 98.80 0.00 0.00 0.00 0.00 0.00 00:08:51.044 00:08:51.044 true 00:08:51.044 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:51.044 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:51.304 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:51.304 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:51.304 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1870564 00:08:51.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.876 Nvme0n1 : 3.00 25353.33 99.04 0.00 0.00 0.00 0.00 0.00 00:08:51.876 [2024-11-20T09:26:24.252Z] =================================================================================================================== 00:08:51.876 [2024-11-20T09:26:24.252Z] Total : 25353.33 99.04 0.00 0.00 0.00 0.00 0.00 00:08:51.876 00:08:52.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.817 Nvme0n1 : 4.00 25414.75 99.28 0.00 0.00 0.00 0.00 0.00 00:08:52.817 [2024-11-20T09:26:25.193Z] 
=================================================================================================================== 00:08:52.817 [2024-11-20T09:26:25.193Z] Total : 25414.75 99.28 0.00 0.00 0.00 0.00 0.00 00:08:52.817 00:08:54.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.203 Nvme0n1 : 5.00 25452.00 99.42 0.00 0.00 0.00 0.00 0.00 00:08:54.203 [2024-11-20T09:26:26.579Z] =================================================================================================================== 00:08:54.203 [2024-11-20T09:26:26.579Z] Total : 25452.00 99.42 0.00 0.00 0.00 0.00 0.00 00:08:54.203 00:08:55.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.148 Nvme0n1 : 6.00 25487.33 99.56 0.00 0.00 0.00 0.00 0.00 00:08:55.148 [2024-11-20T09:26:27.524Z] =================================================================================================================== 00:08:55.148 [2024-11-20T09:26:27.524Z] Total : 25487.33 99.56 0.00 0.00 0.00 0.00 0.00 00:08:55.148 00:08:56.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.091 Nvme0n1 : 7.00 25512.29 99.66 0.00 0.00 0.00 0.00 0.00 00:08:56.091 [2024-11-20T09:26:28.467Z] =================================================================================================================== 00:08:56.091 [2024-11-20T09:26:28.467Z] Total : 25512.29 99.66 0.00 0.00 0.00 0.00 0.00 00:08:56.091 00:08:57.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.036 Nvme0n1 : 8.00 25531.25 99.73 0.00 0.00 0.00 0.00 0.00 00:08:57.036 [2024-11-20T09:26:29.412Z] =================================================================================================================== 00:08:57.036 [2024-11-20T09:26:29.412Z] Total : 25531.25 99.73 0.00 0.00 0.00 0.00 0.00 00:08:57.036 00:08:57.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.977 Nvme0n1 : 9.00 25538.89 99.76 0.00 0.00 0.00 0.00 0.00 00:08:57.977 [2024-11-20T09:26:30.353Z] =================================================================================================================== 00:08:57.977 [2024-11-20T09:26:30.353Z] Total : 25538.89 99.76 0.00 0.00 0.00 0.00 0.00 00:08:57.977 00:08:58.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.917 Nvme0n1 : 10.00 25551.00 99.81 0.00 0.00 0.00 0.00 0.00 00:08:58.917 [2024-11-20T09:26:31.293Z] =================================================================================================================== 00:08:58.917 [2024-11-20T09:26:31.293Z] Total : 25551.00 99.81 0.00 0.00 0.00 0.00 0.00 00:08:58.917 00:08:58.917 00:08:58.917 Latency(us) 00:08:58.917 [2024-11-20T09:26:31.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.917 Nvme0n1 : 10.00 25551.86 99.81 0.00 0.00 5006.11 2498.56 11195.73 00:08:58.917 [2024-11-20T09:26:31.293Z] =================================================================================================================== 00:08:58.917 [2024-11-20T09:26:31.293Z] Total : 25551.86 99.81 0.00 0.00 5006.11 2498.56 11195.73 00:08:58.917 { 00:08:58.917 "results": [ 00:08:58.917 { 00:08:58.917 "job": "Nvme0n1", 00:08:58.917 "core_mask": "0x2", 00:08:58.917 "workload": "randwrite", 00:08:58.917 "status": "finished", 00:08:58.917 "queue_depth": 128, 00:08:58.917 "io_size": 4096, 00:08:58.917 
"runtime": 10.004673, 00:08:58.917 "iops": 25551.859616001442, 00:08:58.917 "mibps": 99.81195162500563, 00:08:58.917 "io_failed": 0, 00:08:58.917 "io_timeout": 0, 00:08:58.917 "avg_latency_us": 5006.113864553262, 00:08:58.917 "min_latency_us": 2498.56, 00:08:58.917 "max_latency_us": 11195.733333333334 00:08:58.917 } 00:08:58.917 ], 00:08:58.917 "core_count": 1 00:08:58.917 } 00:08:58.917 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1870229 00:08:58.917 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1870229 ']' 00:08:58.917 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1870229 00:08:58.917 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:58.917 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.917 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870229 00:08:59.177 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:59.177 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:59.177 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870229' 00:08:59.177 killing process with pid 1870229 00:08:59.177 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1870229 00:08:59.177 Received shutdown signal, test time was about 10.000000 seconds 00:08:59.177 00:08:59.177 Latency(us) 00:08:59.177 [2024-11-20T09:26:31.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.177 [2024-11-20T09:26:31.553Z] =================================================================================================================== 00:08:59.177 [2024-11-20T09:26:31.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:59.177 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1870229 00:08:59.177 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.438 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:59.438 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:59.438 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:59.700 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:59.700 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:59.700 10:26:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.700 [2024-11-20 10:26:32.066680] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:59.960 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:08:59.960 request: 00:08:59.960 { 00:08:59.960 "uuid": "6ed978f8-0d2b-4f91-b916-fbad59c09414", 00:08:59.960 "method": "bdev_lvol_get_lvstores", 00:08:59.960 "req_id": 1 00:08:59.960 } 00:08:59.961 Got JSON-RPC error response 00:08:59.961 response: 00:08:59.961 { 00:08:59.961 "code": -19, 00:08:59.961 "message": "No such device" 00:08:59.961 } 00:08:59.961 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:59.961 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:59.961 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:59.961 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:59.961 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:00.221 aio_bdev 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 87cf3e62-f455-4e64-9159-6c0f7fe17115 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=87cf3e62-f455-4e64-9159-6c0f7fe17115 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.221 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:00.528 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87cf3e62-f455-4e64-9159-6c0f7fe17115 -t 2000 00:09:00.528 [ 00:09:00.528 { 00:09:00.528 "name": "87cf3e62-f455-4e64-9159-6c0f7fe17115", 00:09:00.528 "aliases": [ 00:09:00.528 "lvs/lvol" 00:09:00.528 ], 00:09:00.528 "product_name": "Logical Volume", 00:09:00.528 "block_size": 4096, 00:09:00.528 "num_blocks": 38912, 00:09:00.528 "uuid": "87cf3e62-f455-4e64-9159-6c0f7fe17115", 00:09:00.528 "assigned_rate_limits": { 00:09:00.528 "rw_ios_per_sec": 0, 00:09:00.528 "rw_mbytes_per_sec": 0, 00:09:00.528 "r_mbytes_per_sec": 0, 00:09:00.528 "w_mbytes_per_sec": 0 00:09:00.528 }, 00:09:00.528 "claimed": false, 00:09:00.528 "zoned": false, 00:09:00.528 "supported_io_types": { 00:09:00.528 "read": true, 00:09:00.528 "write": true, 00:09:00.528 "unmap": true, 00:09:00.528 "flush": false, 00:09:00.528 "reset": true, 00:09:00.528 "nvme_admin": false, 00:09:00.528 "nvme_io": false, 00:09:00.528 "nvme_io_md": false, 00:09:00.528 "write_zeroes": true, 00:09:00.528 "zcopy": false, 00:09:00.528 "get_zone_info": false, 00:09:00.528 "zone_management": false, 00:09:00.528 "zone_append": false, 00:09:00.528 "compare": false, 00:09:00.528 "compare_and_write": false, 00:09:00.528 "abort": false, 00:09:00.528 "seek_hole": true, 00:09:00.528 "seek_data": true, 00:09:00.528 "copy": false, 00:09:00.528 "nvme_iov_md": false 00:09:00.528 }, 00:09:00.528 "driver_specific": { 00:09:00.528 "lvol": { 00:09:00.528 "lvol_store_uuid": "6ed978f8-0d2b-4f91-b916-fbad59c09414", 00:09:00.528 "base_bdev": "aio_bdev", 00:09:00.528 "thin_provision": false, 00:09:00.528 "num_allocated_clusters": 38, 00:09:00.528 "snapshot": false, 00:09:00.528 "clone": false, 00:09:00.528 "esnap_clone": false 00:09:00.528 } 00:09:00.528 } 00:09:00.528 } 00:09:00.528 ] 00:09:00.528 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:00.528 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:09:00.528 
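The sequence just traced is the interesting part of the clean-path test: bdev_aio_delete hot-removes the base bdev, which closes the lvstore, so the following bdev_lvol_get_lvstores correctly fails with -19 (No such device); re-creating the AIO bdev over the same backing file triggers bdev examine, and the lvstore and its lvol reappear with metadata intact (num_allocated_clusters is still 38). A condensed, hedged replay of that round trip, where $AIO_FILE, $LVS_UUID and $LVOL_UUID stand in for the path and UUIDs shown in the trace:

  rpc.py bdev_aio_delete aio_bdev                   # closes lvstore "lvs"
  rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID"      # now fails: -19, No such device
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096  # examine re-opens the lvstore
  rpc.py bdev_get_bdevs -b "$LVOL_UUID" -t 2000     # lvol is back, metadata intact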
10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:00.789 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:00.789 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:09:00.789 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:00.789 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:00.789 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87cf3e62-f455-4e64-9159-6c0f7fe17115 00:09:01.049 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6ed978f8-0d2b-4f91-b916-fbad59c09414 00:09:01.308 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:01.308 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.568 00:09:01.568 real 0m15.878s 00:09:01.568 user 0m15.625s 00:09:01.568 sys 0m1.415s 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:01.568 ************************************ 00:09:01.568 END TEST lvs_grow_clean 00:09:01.568 ************************************ 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.568 ************************************ 00:09:01.568 START TEST lvs_grow_dirty 00:09:01.568 ************************************ 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:01.568 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.829 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:01.829 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:01.829 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:01.829 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:01.829 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:02.089 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:02.089 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:02.089 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f lvol 150 00:09:02.350 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8e977d7f-4367-4895-9512-e078356b3151 00:09:02.350 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.350 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:02.350 [2024-11-20 10:26:34.654829] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:02.350 [2024-11-20 10:26:34.654870] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:02.350 true 00:09:02.350 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:02.350 10:26:34 
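The bdev_aio_rescan notice above ("old block count 51200, new block count 102400") is just the 200M-to-400M truncate expressed in 4096-byte blocks. Note that only the AIO bdev grows at this point: the "Unsupported bdev event: type 1" line shows the lvol layer ignoring the resize event, and the lvstore keeps its 49 data clusters until bdev_lvol_grow_lvstore is invoked during the I/O run below. Block-count check (sketch):

  echo $(( 200 * 1024 * 1024 / 4096 ))   # 51200  blocks for the 200M file
  echo $(( 400 * 1024 * 1024 / 4096 ))   # 102400 blocks after truncate -s 400M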
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:02.610 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:02.610 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:02.872 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e977d7f-4367-4895-9512-e078356b3151 00:09:02.872 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:03.133 [2024-11-20 10:26:35.300689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1873330 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1873330 /var/tmp/bdevperf.sock 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1873330 ']' 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.133 10:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.395 [2024-11-20 10:26:35.543636] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
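Three RPCs above are all it takes to export the lvol over NVMe/TCP, and bdevperf then consumes it as an initiator. Collected in one place (commands copied from the trace; this assumes an nvmf target is already running and rpc.py points at its RPC socket):

  # target side: subsystem, namespace, listener
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e977d7f-4367-4895-9512-e078356b3151
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side (bdevperf's own RPC socket): attach the namespace as Nvme0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0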
00:09:03.395 [2024-11-20 10:26:35.543689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873330 ] 00:09:03.395 [2024-11-20 10:26:35.625518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.395 [2024-11-20 10:26:35.655420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.967 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.967 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:03.967 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:04.538 Nvme0n1 00:09:04.538 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:04.798 [ 00:09:04.798 { 00:09:04.798 "name": "Nvme0n1", 00:09:04.798 "aliases": [ 00:09:04.798 "8e977d7f-4367-4895-9512-e078356b3151" 00:09:04.798 ], 00:09:04.798 "product_name": "NVMe disk", 00:09:04.798 "block_size": 4096, 00:09:04.798 "num_blocks": 38912, 00:09:04.798 "uuid": "8e977d7f-4367-4895-9512-e078356b3151", 00:09:04.798 "numa_id": 0, 00:09:04.798 "assigned_rate_limits": { 00:09:04.798 "rw_ios_per_sec": 0, 00:09:04.798 "rw_mbytes_per_sec": 0, 00:09:04.798 "r_mbytes_per_sec": 0, 00:09:04.798 "w_mbytes_per_sec": 0 00:09:04.798 }, 00:09:04.798 "claimed": false, 00:09:04.798 "zoned": false, 00:09:04.798 "supported_io_types": { 00:09:04.798 "read": true, 00:09:04.798 "write": true, 00:09:04.798 "unmap": true, 00:09:04.798 "flush": true, 00:09:04.798 "reset": true, 00:09:04.798 "nvme_admin": true, 00:09:04.798 "nvme_io": true, 00:09:04.798 "nvme_io_md": false, 00:09:04.798 "write_zeroes": true, 00:09:04.798 "zcopy": false, 00:09:04.798 "get_zone_info": false, 00:09:04.798 "zone_management": false, 00:09:04.798 "zone_append": false, 00:09:04.798 "compare": true, 00:09:04.798 "compare_and_write": true, 00:09:04.798 "abort": true, 00:09:04.798 "seek_hole": false, 00:09:04.798 "seek_data": false, 00:09:04.798 "copy": true, 00:09:04.798 "nvme_iov_md": false 00:09:04.798 }, 00:09:04.798 "memory_domains": [ 00:09:04.798 { 00:09:04.798 "dma_device_id": "system", 00:09:04.798 "dma_device_type": 1 00:09:04.798 } 00:09:04.798 ], 00:09:04.798 "driver_specific": { 00:09:04.798 "nvme": [ 00:09:04.798 { 00:09:04.798 "trid": { 00:09:04.798 "trtype": "TCP", 00:09:04.798 "adrfam": "IPv4", 00:09:04.798 "traddr": "10.0.0.2", 00:09:04.798 "trsvcid": "4420", 00:09:04.798 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:04.798 }, 00:09:04.798 "ctrlr_data": { 00:09:04.798 "cntlid": 1, 00:09:04.798 "vendor_id": "0x8086", 00:09:04.798 "model_number": "SPDK bdev Controller", 00:09:04.798 "serial_number": "SPDK0", 00:09:04.798 "firmware_revision": "25.01", 00:09:04.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.798 "oacs": { 00:09:04.798 "security": 0, 00:09:04.798 "format": 0, 00:09:04.798 "firmware": 0, 00:09:04.798 "ns_manage": 0 00:09:04.798 }, 00:09:04.798 "multi_ctrlr": true, 00:09:04.798 
"ana_reporting": false 00:09:04.798 }, 00:09:04.798 "vs": { 00:09:04.798 "nvme_version": "1.3" 00:09:04.798 }, 00:09:04.798 "ns_data": { 00:09:04.798 "id": 1, 00:09:04.798 "can_share": true 00:09:04.798 } 00:09:04.798 } 00:09:04.799 ], 00:09:04.799 "mp_policy": "active_passive" 00:09:04.799 } 00:09:04.799 } 00:09:04.799 ] 00:09:04.799 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1873669 00:09:04.799 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:04.799 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.799 Running I/O for 10 seconds... 00:09:05.741 Latency(us) 00:09:05.741 [2024-11-20T09:26:38.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.741 Nvme0n1 : 1.00 25041.00 97.82 0.00 0.00 0.00 0.00 0.00 00:09:05.741 [2024-11-20T09:26:38.117Z] =================================================================================================================== 00:09:05.741 [2024-11-20T09:26:38.117Z] Total : 25041.00 97.82 0.00 0.00 0.00 0.00 0.00 00:09:05.741 00:09:06.682 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:06.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.682 Nvme0n1 : 2.00 25213.00 98.49 0.00 0.00 0.00 0.00 0.00 00:09:06.682 [2024-11-20T09:26:39.058Z] =================================================================================================================== 00:09:06.682 [2024-11-20T09:26:39.058Z] Total : 25213.00 98.49 0.00 0.00 0.00 0.00 0.00 00:09:06.682 00:09:06.942 true 00:09:06.942 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:06.942 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:06.942 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:06.942 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:06.942 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1873669 00:09:07.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.882 Nvme0n1 : 3.00 25274.33 98.73 0.00 0.00 0.00 0.00 0.00 00:09:07.882 [2024-11-20T09:26:40.258Z] =================================================================================================================== 00:09:07.882 [2024-11-20T09:26:40.258Z] Total : 25274.33 98.73 0.00 0.00 0.00 0.00 0.00 00:09:07.882 00:09:08.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.823 Nvme0n1 : 4.00 25304.50 98.85 0.00 0.00 0.00 0.00 0.00 00:09:08.823 [2024-11-20T09:26:41.199Z] 
=================================================================================================================== 00:09:08.823 [2024-11-20T09:26:41.199Z] Total : 25304.50 98.85 0.00 0.00 0.00 0.00 0.00 00:09:08.823 00:09:09.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.764 Nvme0n1 : 5.00 25347.80 99.01 0.00 0.00 0.00 0.00 0.00 00:09:09.764 [2024-11-20T09:26:42.140Z] =================================================================================================================== 00:09:09.764 [2024-11-20T09:26:42.140Z] Total : 25347.80 99.01 0.00 0.00 0.00 0.00 0.00 00:09:09.764 00:09:10.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.703 Nvme0n1 : 6.00 25387.17 99.17 0.00 0.00 0.00 0.00 0.00 00:09:10.703 [2024-11-20T09:26:43.079Z] =================================================================================================================== 00:09:10.703 [2024-11-20T09:26:43.079Z] Total : 25387.17 99.17 0.00 0.00 0.00 0.00 0.00 00:09:10.703 00:09:12.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.087 Nvme0n1 : 7.00 25416.00 99.28 0.00 0.00 0.00 0.00 0.00 00:09:12.087 [2024-11-20T09:26:44.463Z] =================================================================================================================== 00:09:12.087 [2024-11-20T09:26:44.463Z] Total : 25416.00 99.28 0.00 0.00 0.00 0.00 0.00 00:09:12.087 00:09:13.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.028 Nvme0n1 : 8.00 25437.25 99.36 0.00 0.00 0.00 0.00 0.00 00:09:13.028 [2024-11-20T09:26:45.404Z] =================================================================================================================== 00:09:13.028 [2024-11-20T09:26:45.404Z] Total : 25437.25 99.36 0.00 0.00 0.00 0.00 0.00 00:09:13.028 00:09:13.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.969 Nvme0n1 : 9.00 25454.11 99.43 0.00 0.00 0.00 0.00 0.00 00:09:13.969 [2024-11-20T09:26:46.345Z] =================================================================================================================== 00:09:13.969 [2024-11-20T09:26:46.345Z] Total : 25454.11 99.43 0.00 0.00 0.00 0.00 0.00 00:09:13.969 00:09:14.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.911 Nvme0n1 : 10.00 25468.30 99.49 0.00 0.00 0.00 0.00 0.00 00:09:14.911 [2024-11-20T09:26:47.287Z] =================================================================================================================== 00:09:14.911 [2024-11-20T09:26:47.287Z] Total : 25468.30 99.49 0.00 0.00 0.00 0.00 0.00 00:09:14.911 00:09:14.911 00:09:14.911 Latency(us) 00:09:14.911 [2024-11-20T09:26:47.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.911 Nvme0n1 : 10.00 25466.03 99.48 0.00 0.00 5023.09 2990.08 10267.31 00:09:14.911 [2024-11-20T09:26:47.287Z] =================================================================================================================== 00:09:14.911 [2024-11-20T09:26:47.287Z] Total : 25466.03 99.48 0.00 0.00 5023.09 2990.08 10267.31 00:09:14.911 { 00:09:14.911 "results": [ 00:09:14.911 { 00:09:14.911 "job": "Nvme0n1", 00:09:14.911 "core_mask": "0x2", 00:09:14.911 "workload": "randwrite", 00:09:14.911 "status": "finished", 00:09:14.911 "queue_depth": 128, 00:09:14.911 "io_size": 4096, 00:09:14.911 
"runtime": 10.003366, 00:09:14.911 "iops": 25466.028134929784, 00:09:14.911 "mibps": 99.47667240206947, 00:09:14.911 "io_failed": 0, 00:09:14.911 "io_timeout": 0, 00:09:14.911 "avg_latency_us": 5023.091968732253, 00:09:14.911 "min_latency_us": 2990.08, 00:09:14.911 "max_latency_us": 10267.306666666667 00:09:14.911 } 00:09:14.911 ], 00:09:14.911 "core_count": 1 00:09:14.911 } 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1873330 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1873330 ']' 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1873330 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873330 00:09:14.911 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.912 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.912 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873330' 00:09:14.912 killing process with pid 1873330 00:09:14.912 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1873330 00:09:14.912 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.912 00:09:14.912 Latency(us) 00:09:14.912 [2024-11-20T09:26:47.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.912 [2024-11-20T09:26:47.288Z] =================================================================================================================== 00:09:14.912 [2024-11-20T09:26:47.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.912 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1873330 00:09:14.912 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.173 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:15.434 10:26:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1869521 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1869521 00:09:15.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1869521 Killed "${NVMF_APP[@]}" "$@" 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.434 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1875877 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1875877 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1875877 ']' 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.696 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.696 [2024-11-20 10:26:47.857679] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:09:15.696 [2024-11-20 10:26:47.857735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.696 [2024-11-20 10:26:47.949009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.696 [2024-11-20 10:26:47.980285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.696 [2024-11-20 10:26:47.980312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.696 [2024-11-20 10:26:47.980318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.696 [2024-11-20 10:26:47.980322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:15.696 [2024-11-20 10:26:47.980327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.696 [2024-11-20 10:26:47.980807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.320 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.320 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:16.320 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.320 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.320 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.580 [2024-11-20 10:26:48.851415] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:16.580 [2024-11-20 10:26:48.851494] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:16.580 [2024-11-20 10:26:48.851517] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8e977d7f-4367-4895-9512-e078356b3151 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8e977d7f-4367-4895-9512-e078356b3151 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.580 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.841 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e977d7f-4367-4895-9512-e078356b3151 -t 2000 00:09:16.841 [ 00:09:16.841 { 00:09:16.841 "name": "8e977d7f-4367-4895-9512-e078356b3151", 00:09:16.841 "aliases": [ 00:09:16.841 "lvs/lvol" 00:09:16.841 ], 00:09:16.841 "product_name": "Logical Volume", 00:09:16.841 "block_size": 4096, 00:09:16.841 "num_blocks": 38912, 00:09:16.841 "uuid": "8e977d7f-4367-4895-9512-e078356b3151", 00:09:16.841 "assigned_rate_limits": { 00:09:16.841 "rw_ios_per_sec": 0, 00:09:16.841 "rw_mbytes_per_sec": 0, 
00:09:16.841 "r_mbytes_per_sec": 0, 00:09:16.841 "w_mbytes_per_sec": 0 00:09:16.841 }, 00:09:16.841 "claimed": false, 00:09:16.841 "zoned": false, 00:09:16.841 "supported_io_types": { 00:09:16.841 "read": true, 00:09:16.841 "write": true, 00:09:16.841 "unmap": true, 00:09:16.841 "flush": false, 00:09:16.841 "reset": true, 00:09:16.841 "nvme_admin": false, 00:09:16.841 "nvme_io": false, 00:09:16.841 "nvme_io_md": false, 00:09:16.841 "write_zeroes": true, 00:09:16.841 "zcopy": false, 00:09:16.841 "get_zone_info": false, 00:09:16.841 "zone_management": false, 00:09:16.841 "zone_append": false, 00:09:16.841 "compare": false, 00:09:16.841 "compare_and_write": false, 00:09:16.841 "abort": false, 00:09:16.841 "seek_hole": true, 00:09:16.841 "seek_data": true, 00:09:16.841 "copy": false, 00:09:16.841 "nvme_iov_md": false 00:09:16.841 }, 00:09:16.841 "driver_specific": { 00:09:16.842 "lvol": { 00:09:16.842 "lvol_store_uuid": "21a6dc91-5626-41be-a2f2-ecb2dc020b7f", 00:09:16.842 "base_bdev": "aio_bdev", 00:09:16.842 "thin_provision": false, 00:09:16.842 "num_allocated_clusters": 38, 00:09:16.842 "snapshot": false, 00:09:16.842 "clone": false, 00:09:16.842 "esnap_clone": false 00:09:16.842 } 00:09:16.842 } 00:09:16.842 } 00:09:16.842 ] 00:09:16.842 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:16.842 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:16.842 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:17.102 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:17.102 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:17.102 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.363 [2024-11-20 10:26:49.679992] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:17.363 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:17.624 request: 00:09:17.624 { 00:09:17.624 "uuid": "21a6dc91-5626-41be-a2f2-ecb2dc020b7f", 00:09:17.624 "method": "bdev_lvol_get_lvstores", 00:09:17.624 "req_id": 1 00:09:17.624 } 00:09:17.624 Got JSON-RPC error response 00:09:17.624 response: 00:09:17.624 { 00:09:17.624 "code": -19, 00:09:17.624 "message": "No such device" 00:09:17.624 } 00:09:17.624 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:17.624 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.624 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.624 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.624 10:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.886 aio_bdev 00:09:17.886 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8e977d7f-4367-4895-9512-e078356b3151 00:09:17.886 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8e977d7f-4367-4895-9512-e078356b3151 00:09:17.886 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.886 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:17.886 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.886 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.886 10:26:50 
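The waitforbdev helper being expanded here boils down to two RPCs, as the next lines show: flush any pending examine callbacks, then ask for the bdev with a timeout (the bdev_timeout=2000 local becomes the -t 2000 argument). A minimal sketch of the same wait, with $BDEV as a placeholder name:

  rpc.py bdev_wait_for_examine               # block until examine-on-create finishes
  rpc.py bdev_get_bdevs -b "$BDEV" -t 2000   # then wait up to 2000 ms for the bdev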
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.148 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e977d7f-4367-4895-9512-e078356b3151 -t 2000 00:09:18.148 [ 00:09:18.148 { 00:09:18.148 "name": "8e977d7f-4367-4895-9512-e078356b3151", 00:09:18.148 "aliases": [ 00:09:18.148 "lvs/lvol" 00:09:18.148 ], 00:09:18.148 "product_name": "Logical Volume", 00:09:18.148 "block_size": 4096, 00:09:18.148 "num_blocks": 38912, 00:09:18.148 "uuid": "8e977d7f-4367-4895-9512-e078356b3151", 00:09:18.148 "assigned_rate_limits": { 00:09:18.148 "rw_ios_per_sec": 0, 00:09:18.148 "rw_mbytes_per_sec": 0, 00:09:18.148 "r_mbytes_per_sec": 0, 00:09:18.148 "w_mbytes_per_sec": 0 00:09:18.148 }, 00:09:18.148 "claimed": false, 00:09:18.148 "zoned": false, 00:09:18.148 "supported_io_types": { 00:09:18.148 "read": true, 00:09:18.148 "write": true, 00:09:18.148 "unmap": true, 00:09:18.148 "flush": false, 00:09:18.148 "reset": true, 00:09:18.148 "nvme_admin": false, 00:09:18.148 "nvme_io": false, 00:09:18.148 "nvme_io_md": false, 00:09:18.148 "write_zeroes": true, 00:09:18.148 "zcopy": false, 00:09:18.148 "get_zone_info": false, 00:09:18.148 "zone_management": false, 00:09:18.148 "zone_append": false, 00:09:18.148 "compare": false, 00:09:18.148 "compare_and_write": false, 00:09:18.148 "abort": false, 00:09:18.148 "seek_hole": true, 00:09:18.148 "seek_data": true, 00:09:18.148 "copy": false, 00:09:18.148 "nvme_iov_md": false 00:09:18.148 }, 00:09:18.148 "driver_specific": { 00:09:18.148 "lvol": { 00:09:18.148 "lvol_store_uuid": "21a6dc91-5626-41be-a2f2-ecb2dc020b7f", 00:09:18.148 "base_bdev": "aio_bdev", 00:09:18.148 "thin_provision": false, 00:09:18.148 "num_allocated_clusters": 38, 00:09:18.148 "snapshot": false, 00:09:18.148 "clone": false, 00:09:18.148 "esnap_clone": false 00:09:18.148 } 00:09:18.148 } 00:09:18.148 } 00:09:18.148 ] 00:09:18.148 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:18.148 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:18.148 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.409 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.409 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:18.409 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.670 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:18.670 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e977d7f-4367-4895-9512-e078356b3151 00:09:18.670 10:26:50 
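Teardown then proceeds leaf-first, as the lines around this point show: the lvol is deleted before its lvstore, the lvstore before the AIO bdev that holds its blobstore, and the backing file is removed last. In one place (UUIDs and path as placeholders for the values in the trace):

  rpc.py bdev_lvol_delete "$LVOL_UUID"
  rpc.py bdev_lvol_delete_lvstore -u "$LVS_UUID"
  rpc.py bdev_aio_delete aio_bdev
  rm -f "$AIO_FILE"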
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21a6dc91-5626-41be-a2f2-ecb2dc020b7f 00:09:18.930 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.190 00:09:19.190 real 0m17.622s 00:09:19.190 user 0m45.982s 00:09:19.190 sys 0m2.988s 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.190 ************************************ 00:09:19.190 END TEST lvs_grow_dirty 00:09:19.190 ************************************ 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:19.190 nvmf_trace.0 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.190 rmmod nvme_tcp 00:09:19.190 rmmod nvme_fabrics 00:09:19.190 rmmod nvme_keyring 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:19.190 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:19.190 
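nvmftestfini above unloads the kernel initiator modules with modprobe -v -r, which is what prints the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines; the set +e / for i in {1..20} framing tolerates a module that is still busy while connections drain. A minimal sketch of that retry idea (the exact loop body lives in nvmf/common.sh and may differ):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # may hit -EBUSY until the last queue is gone
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e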
10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1875877 ']' 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1875877 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1875877 ']' 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1875877 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1875877 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1875877' 00:09:19.451 killing process with pid 1875877 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1875877 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1875877 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.451 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.995 00:09:21.995 real 0m44.843s 00:09:21.995 user 1m8.019s 00:09:21.995 sys 0m10.512s 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.995 ************************************ 00:09:21.995 END TEST nvmf_lvs_grow 00:09:21.995 ************************************ 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.995 ************************************ 00:09:21.995 START TEST nvmf_bdev_io_wait 00:09:21.995 ************************************ 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:21.995 * Looking for test storage... 00:09:21.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.995 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:21.995 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.996 --rc genhtml_branch_coverage=1 00:09:21.996 --rc genhtml_function_coverage=1 00:09:21.996 --rc genhtml_legend=1 00:09:21.996 --rc geninfo_all_blocks=1 00:09:21.996 --rc geninfo_unexecuted_blocks=1 00:09:21.996 00:09:21.996 ' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.996 --rc genhtml_branch_coverage=1 00:09:21.996 --rc genhtml_function_coverage=1 00:09:21.996 --rc genhtml_legend=1 00:09:21.996 --rc geninfo_all_blocks=1 00:09:21.996 --rc geninfo_unexecuted_blocks=1 00:09:21.996 00:09:21.996 ' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.996 --rc genhtml_branch_coverage=1 00:09:21.996 --rc genhtml_function_coverage=1 00:09:21.996 --rc genhtml_legend=1 00:09:21.996 --rc geninfo_all_blocks=1 00:09:21.996 --rc geninfo_unexecuted_blocks=1 00:09:21.996 00:09:21.996 ' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.996 --rc genhtml_branch_coverage=1 00:09:21.996 --rc genhtml_function_coverage=1 00:09:21.996 --rc genhtml_legend=1 00:09:21.996 --rc geninfo_all_blocks=1 00:09:21.996 --rc geninfo_unexecuted_blocks=1 00:09:21.996 00:09:21.996 ' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.996 10:26:54 
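The `lt 1.15 2` call in the xtrace above is scripts/common.sh comparing the installed lcov version against 2.x: both strings are split on `.`, `-`, and `:` and compared field by field. A simplified sketch of the same idea (not the exact upstream cmp_versions function):

# Field-wise "less than" version compare in the spirit of cmp_versions above;
# returns success (0) when $1 is strictly older than $2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x, adjust LCOV_OPTS accordingly"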
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
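The `[: : integer expression expected` message above is a real, if harmless, script bug: `'[' '' -eq 1 ']'` hands test(1) an empty string where it needs an integer, because the guarded variable is unset in this configuration. The usual fix is a default expansion; a hedged sketch, where $SPDK_TEST_SOMETHING and the appended flag are hypothetical stand-ins:

# Default the variable to 0 so test(1) always sees an integer.
# $SPDK_TEST_SOMETHING and --enable-something are illustrative only.
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
    NVMF_APP+=(--enable-something)
fi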
MALLOC_BLOCK_SIZE=512 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.996 10:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:30.273 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:30.273 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.273 10:27:01 
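For context on the discovery loop above: vendor 0x8086 with device 0x159b is an Intel E810 function bound to the ice driver, and each matched PCI function is resolved to its kernel net device through sysfs. A minimal sketch of that PCI-to-netdev lookup:

# Map a PCI function (bus:device.function) to its net device name(s) via
# /sys, as the loop above does for 0000:4b:00.0 and 0000:4b:00.1.
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"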
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:30.273 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:30.273 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.273 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:09:30.274 00:09:30.274 --- 10.0.0.2 ping statistics --- 00:09:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.274 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:30.274 00:09:30.274 --- 10.0.0.1 ping statistics --- 00:09:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.274 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1880989 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1880989 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1880989 ']' 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.274 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 [2024-11-20 10:27:01.660652] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
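To recap the plumbing that produced the ping statistics above: the target-side E810 port (cvl_0_0) is moved into a private network namespace, both sides get 10.0.0.0/24 addresses, and nvmf_tgt is then launched inside that namespace so initiator traffic genuinely crosses the TCP stack. A condensed sketch of those steps as they appear in the log:

# Condensed cvl_0_0_ns_spdk setup from the log: target NIC in its own
# namespace, initiator NIC (cvl_0_1) left in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target across the namespace boundary
# The target app then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc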
00:09:30.274 [2024-11-20 10:27:01.660717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.274 [2024-11-20 10:27:01.763809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.274 [2024-11-20 10:27:01.820868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.274 [2024-11-20 10:27:01.820923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.274 [2024-11-20 10:27:01.820933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.274 [2024-11-20 10:27:01.820940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.274 [2024-11-20 10:27:01.820946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.274 [2024-11-20 10:27:01.823059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.274 [2024-11-20 10:27:01.823225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.274 [2024-11-20 10:27:01.823307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.274 [2024-11-20 10:27:01.823307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:30.274 [2024-11-20 10:27:02.614276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.274 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 Malloc0 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 [2024-11-20 10:27:02.679805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1881234 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1881236 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.537 { 00:09:30.537 "params": { 
00:09:30.537 "name": "Nvme$subsystem", 00:09:30.537 "trtype": "$TEST_TRANSPORT", 00:09:30.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.537 "adrfam": "ipv4", 00:09:30.537 "trsvcid": "$NVMF_PORT", 00:09:30.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.537 "hdgst": ${hdgst:-false}, 00:09:30.537 "ddgst": ${ddgst:-false} 00:09:30.537 }, 00:09:30.537 "method": "bdev_nvme_attach_controller" 00:09:30.537 } 00:09:30.537 EOF 00:09:30.537 )") 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1881238 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.537 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.537 { 00:09:30.537 "params": { 00:09:30.537 "name": "Nvme$subsystem", 00:09:30.537 "trtype": "$TEST_TRANSPORT", 00:09:30.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.537 "adrfam": "ipv4", 00:09:30.537 "trsvcid": "$NVMF_PORT", 00:09:30.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.537 "hdgst": ${hdgst:-false}, 00:09:30.537 "ddgst": ${ddgst:-false} 00:09:30.537 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 } 00:09:30.538 EOF 00:09:30.538 )") 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1881241 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.538 { 00:09:30.538 "params": { 00:09:30.538 "name": "Nvme$subsystem", 00:09:30.538 "trtype": "$TEST_TRANSPORT", 00:09:30.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.538 "adrfam": "ipv4", 00:09:30.538 "trsvcid": "$NVMF_PORT", 00:09:30.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.538 "hdgst": ${hdgst:-false}, 
00:09:30.538 "ddgst": ${ddgst:-false} 00:09:30.538 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 } 00:09:30.538 EOF 00:09:30.538 )") 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.538 { 00:09:30.538 "params": { 00:09:30.538 "name": "Nvme$subsystem", 00:09:30.538 "trtype": "$TEST_TRANSPORT", 00:09:30.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.538 "adrfam": "ipv4", 00:09:30.538 "trsvcid": "$NVMF_PORT", 00:09:30.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.538 "hdgst": ${hdgst:-false}, 00:09:30.538 "ddgst": ${ddgst:-false} 00:09:30.538 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 } 00:09:30.538 EOF 00:09:30.538 )") 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1881234 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.538 "params": { 00:09:30.538 "name": "Nvme1", 00:09:30.538 "trtype": "tcp", 00:09:30.538 "traddr": "10.0.0.2", 00:09:30.538 "adrfam": "ipv4", 00:09:30.538 "trsvcid": "4420", 00:09:30.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.538 "hdgst": false, 00:09:30.538 "ddgst": false 00:09:30.538 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 }' 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.538 "params": { 00:09:30.538 "name": "Nvme1", 00:09:30.538 "trtype": "tcp", 00:09:30.538 "traddr": "10.0.0.2", 00:09:30.538 "adrfam": "ipv4", 00:09:30.538 "trsvcid": "4420", 00:09:30.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.538 "hdgst": false, 00:09:30.538 "ddgst": false 00:09:30.538 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 }' 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.538 "params": { 00:09:30.538 "name": "Nvme1", 00:09:30.538 "trtype": "tcp", 00:09:30.538 "traddr": "10.0.0.2", 00:09:30.538 "adrfam": "ipv4", 00:09:30.538 "trsvcid": "4420", 00:09:30.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.538 "hdgst": false, 00:09:30.538 "ddgst": false 00:09:30.538 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 }' 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:30.538 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.538 "params": { 00:09:30.538 "name": "Nvme1", 00:09:30.538 "trtype": "tcp", 00:09:30.538 "traddr": "10.0.0.2", 00:09:30.538 "adrfam": "ipv4", 00:09:30.538 "trsvcid": "4420", 00:09:30.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.538 "hdgst": false, 00:09:30.538 "ddgst": false 00:09:30.538 }, 00:09:30.538 "method": "bdev_nvme_attach_controller" 00:09:30.538 }' 00:09:30.538 [2024-11-20 10:27:02.741697] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:09:30.538 [2024-11-20 10:27:02.741770] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:30.538 [2024-11-20 10:27:02.743201] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:09:30.538 [2024-11-20 10:27:02.743267] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:30.538 [2024-11-20 10:27:02.743434] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:09:30.538 [2024-11-20 10:27:02.743499] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:30.538 [2024-11-20 10:27:02.749490] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
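Above, four independent bdevperf processes are launched against the same cnode1 subsystem, one per workload, each with a disjoint core mask and its own -i instance ID (and hence DPDK file prefix) so they can coexist on one host. A condensed sketch of the fan-out, using the gen_nvmf_target_json generator shown earlier; the process substitution is what appears as /dev/fd/63 in the log:

# Condensed fan-out of the four bdevperf runs above: distinct workloads,
# disjoint core masks, unique instance IDs; all wait'ed on afterwards.
bdevperf=./build/examples/bdevperf
common_args=(-q 128 -o 4096 -t 1 -s 256)
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -w write "${common_args[@]}" & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -w read  "${common_args[@]}" & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -w flush "${common_args[@]}" & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -w unmap "${common_args[@]}" & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"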
00:09:30.538 [2024-11-20 10:27:02.749552] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:30.800 [2024-11-20 10:27:02.958740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.800 [2024-11-20 10:27:02.997353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:30.800 [2024-11-20 10:27:03.052753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.800 [2024-11-20 10:27:03.092934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:09:30.800 [2024-11-20 10:27:03.149213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.060 [2024-11-20 10:27:03.192301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:31.060 [2024-11-20 10:27:03.219723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.060 [2024-11-20 10:27:03.257454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:31.060 Running I/O for 1 seconds...
00:09:31.060 Running I/O for 1 seconds...
00:09:31.060 Running I/O for 1 seconds...
00:09:31.060 Running I/O for 1 seconds...
00:09:32.003 9484.00 IOPS, 37.05 MiB/s
00:09:32.003 Latency(us)
00:09:32.003 [2024-11-20T09:27:04.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.003 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:32.003 Nvme1n1 : 1.01 9525.00 37.21 0.00 0.00 13375.67 7482.03 19114.67
00:09:32.003 [2024-11-20T09:27:04.379Z] ===================================================================================================================
00:09:32.003 [2024-11-20T09:27:04.379Z] Total : 9525.00 37.21 0.00 0.00 13375.67 7482.03 19114.67
00:09:32.263 10045.00 IOPS, 39.24 MiB/s
00:09:32.263 Latency(us)
00:09:32.263 [2024-11-20T09:27:04.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.263 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:32.263 Nvme1n1 : 1.01 10125.55 39.55 0.00 0.00 12599.03 5215.57 21626.88
00:09:32.263 [2024-11-20T09:27:04.639Z] ===================================================================================================================
00:09:32.263 [2024-11-20T09:27:04.639Z] Total : 10125.55 39.55 0.00 0.00 12599.03 5215.57 21626.88
00:09:32.263 10093.00 IOPS, 39.43 MiB/s
00:09:32.263 Latency(us)
00:09:32.263 [2024-11-20T09:27:04.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.263 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:32.263 Nvme1n1 : 1.01 10180.15 39.77 0.00 0.00 12532.19 4696.75 24139.09
00:09:32.264 [2024-11-20T09:27:04.639Z] ===================================================================================================================
00:09:32.264 [2024-11-20T09:27:04.640Z] Total : 10180.15 39.77 0.00 0.00 12532.19 4696.75 24139.09
00:09:32.264 180904.00 IOPS, 706.66 MiB/s
00:09:32.264 Latency(us)
00:09:32.264 [2024-11-20T09:27:04.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.264 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:32.264 Nvme1n1 : 1.00 180543.33 705.25 0.00 0.00 705.16 307.20 1979.73
00:09:32.264 [2024-11-20T09:27:04.640Z] ===================================================================================================================
00:09:32.264 [2024-11-20T09:27:04.640Z] Total : 180543.33 705.25 0.00 0.00 705.16 307.20 1979.73
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1881236
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1881238
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1881241
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:32.264 rmmod nvme_tcp
00:09:32.264 rmmod nvme_fabrics
00:09:32.264 rmmod nvme_keyring
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1880989 ']'
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1880989
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1880989 ']'
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1880989
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:32.264 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1880989
00:09:32.524 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:32.524 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:32.524 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1880989'
00:09:32.524 killing process with pid 1880989
10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1880989
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1880989
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:32.525 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:35.068 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:35.068
00:09:35.068 real 0m13.012s
00:09:35.068 user 0m19.384s
00:09:35.068 sys 0m7.388s
00:09:35.068 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.068 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:35.068 ************************************
00:09:35.068 END TEST nvmf_bdev_io_wait
00:09:35.068 ************************************
00:09:35.069 10:27:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:09:35.069 10:27:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:35.069 10:27:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.069 10:27:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:35.069 ************************************
00:09:35.069 START TEST nvmf_queue_depth
00:09:35.069 ************************************
00:09:35.069 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:09:35.069 * Looking for test storage...
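The nvmf_queue_depth test that starts here stands up a single-subsystem NVMe-oF TCP target backed by a 64 MiB malloc bdev and then drives it with bdevperf at a queue depth of 1024. As a reading aid, the target-side RPC sequence that the rpc_cmd xtrace further below performs can be sketched as follows; every flag is taken from the trace itself, while scripts/rpc.py is only a stand-in for the test's rpc_cmd wrapper (which talks to the default /var/tmp/spdk.sock inside the test namespace):

  # Create the TCP transport; the trace passes -o plus -u 8192 (the I/O unit size)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem that allows any host (-a) and carries a fixed serial number (-s)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Listen on the target-side address of the test's point-to-point link
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420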
00:09:35.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.069 --rc genhtml_branch_coverage=1 00:09:35.069 --rc genhtml_function_coverage=1 00:09:35.069 --rc genhtml_legend=1 00:09:35.069 --rc geninfo_all_blocks=1 00:09:35.069 --rc geninfo_unexecuted_blocks=1 00:09:35.069 00:09:35.069 ' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.069 --rc genhtml_branch_coverage=1 00:09:35.069 --rc genhtml_function_coverage=1 00:09:35.069 --rc genhtml_legend=1 00:09:35.069 --rc geninfo_all_blocks=1 00:09:35.069 --rc geninfo_unexecuted_blocks=1 00:09:35.069 00:09:35.069 ' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.069 --rc genhtml_branch_coverage=1 00:09:35.069 --rc genhtml_function_coverage=1 00:09:35.069 --rc genhtml_legend=1 00:09:35.069 --rc geninfo_all_blocks=1 00:09:35.069 --rc geninfo_unexecuted_blocks=1 00:09:35.069 00:09:35.069 ' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.069 --rc genhtml_branch_coverage=1 00:09:35.069 --rc genhtml_function_coverage=1 00:09:35.069 --rc genhtml_legend=1 00:09:35.069 --rc geninfo_all_blocks=1 00:09:35.069 --rc geninfo_unexecuted_blocks=1 00:09:35.069 00:09:35.069 ' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:35.069 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.070 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:43.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:43.205 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:43.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:43.205 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:43.206 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:43.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:43.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms
00:09:43.206
00:09:43.206 --- 10.0.0.2 ping statistics ---
00:09:43.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:43.206 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:43.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:43.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:09:43.206
00:09:43.206 --- 10.0.0.1 ping statistics ---
00:09:43.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:43.206 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1886397
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1886397
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1886397 ']'
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:43.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:43.206 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:43.206 [2024-11-20 10:27:14.826732] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
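The ip/iptables xtrace above is what separates target from initiator on this phy rig: one of the two e810 ports (cvl_0_0) is moved into a private network namespace, and the nvmf_tgt that has just started runs entirely inside it, so traffic between 10.0.0.1 and 10.0.0.2 goes out over the physical ports rather than loopback. Condensed into a plain-shell sketch, with interface and namespace names exactly as they appear in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity check in both directions before the target is configured
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1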
00:09:43.206 [2024-11-20 10:27:14.826802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.206 [2024-11-20 10:27:14.929734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.206 [2024-11-20 10:27:14.980340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.206 [2024-11-20 10:27:14.980392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.206 [2024-11-20 10:27:14.980401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.206 [2024-11-20 10:27:14.980409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.206 [2024-11-20 10:27:14.980415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.206 [2024-11-20 10:27:14.981196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 [2024-11-20 10:27:15.687051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 Malloc0 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.467 10:27:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 [2024-11-20 10:27:15.748433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1886599 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1886599 /var/tmp/bdevperf.sock 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1886599 ']' 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:43.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.467 10:27:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.467 [2024-11-20 10:27:15.806777] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
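Because bdevperf is launched with -z it starts idle and only exposes its own RPC socket (-r /var/tmp/bdevperf.sock); the trace that follows first attaches the remote subsystem as a local bdev and only then kicks off the timed run. A condensed sketch of that initiator-side sequence, with all arguments taken from the surrounding trace and the SPDK repo paths abbreviated:

  # Start bdevperf in wait-for-RPC mode: queue depth 1024, 4 KiB I/Os, verify workload, 10 s
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # Attach the target's subsystem over TCP; it appears as bdev NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Run the configured job set; results are printed as the table and JSON below
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests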
00:09:43.467 [2024-11-20 10:27:15.806844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886599 ]
00:09:43.728 [2024-11-20 10:27:15.899811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.728 [2024-11-20 10:27:15.953647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:44.298 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:44.298 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:09:44.298 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:09:44.298 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.298 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:44.558 NVMe0n1
00:09:44.558 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.558 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:44.819 Running I/O for 10 seconds...
00:09:46.700 8211.00 IOPS, 32.07 MiB/s
[2024-11-20T09:27:20.015Z] 9728.50 IOPS, 38.00 MiB/s
[2024-11-20T09:27:21.401Z] 10485.33 IOPS, 40.96 MiB/s
[2024-11-20T09:27:22.342Z] 11009.75 IOPS, 43.01 MiB/s
[2024-11-20T09:27:23.281Z] 11471.60 IOPS, 44.81 MiB/s
[2024-11-20T09:27:24.220Z] 11780.50 IOPS, 46.02 MiB/s
[2024-11-20T09:27:25.159Z] 12000.00 IOPS, 46.88 MiB/s
[2024-11-20T09:27:26.121Z] 12214.25 IOPS, 47.71 MiB/s
[2024-11-20T09:27:27.059Z] 12329.89 IOPS, 48.16 MiB/s
[2024-11-20T09:27:27.059Z] 12428.70 IOPS, 48.55 MiB/s
00:09:54.683 Latency(us)
00:09:54.683 [2024-11-20T09:27:27.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:54.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:54.683 Verification LBA range: start 0x0 length 0x4000
00:09:54.683 NVMe0n1 : 10.05 12470.50 48.71 0.00 0.00 81800.10 10704.21 77332.48
00:09:54.683 [2024-11-20T09:27:27.059Z] ===================================================================================================================
00:09:54.683 [2024-11-20T09:27:27.059Z] Total : 12470.50 48.71 0.00 0.00 81800.10 10704.21 77332.48
00:09:54.683 {
00:09:54.683   "results": [
00:09:54.683     {
00:09:54.683       "job": "NVMe0n1",
00:09:54.683       "core_mask": "0x1",
00:09:54.683       "workload": "verify",
00:09:54.683       "status": "finished",
00:09:54.683       "verify_range": {
00:09:54.683         "start": 0,
00:09:54.683         "length": 16384
00:09:54.683       },
00:09:54.683       "queue_depth": 1024,
00:09:54.683       "io_size": 4096,
00:09:54.683       "runtime": 10.04723,
00:09:54.683       "iops": 12470.50181990459,
00:09:54.683       "mibps": 48.712897734002304,
00:09:54.683       "io_failed": 0,
00:09:54.683       "io_timeout": 0,
00:09:54.683       "avg_latency_us": 81800.1029548635,
00:09:54.683       "min_latency_us": 10704.213333333333,
00:09:54.683       "max_latency_us": 77332.48
00:09:54.683     }
00:09:54.683   ],
00:09:54.683   "core_count": 1
00:09:54.683 }
00:09:54.942 10:27:27
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1886599 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1886599 ']' 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1886599 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1886599 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1886599' 00:09:54.942 killing process with pid 1886599 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1886599 00:09:54.942 Received shutdown signal, test time was about 10.000000 seconds 00:09:54.942 00:09:54.942 Latency(us) 00:09:54.942 [2024-11-20T09:27:27.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.942 [2024-11-20T09:27:27.318Z] =================================================================================================================== 00:09:54.942 [2024-11-20T09:27:27.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1886599 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.942 rmmod nvme_tcp 00:09:54.942 rmmod nvme_fabrics 00:09:54.942 rmmod nvme_keyring 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.942 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1886397 ']' 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1886397 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1886397 ']' 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1886397 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:54.943 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1886397 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1886397' 00:09:55.202 killing process with pid 1886397 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1886397 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1886397 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.202 10:27:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.739 00:09:57.739 real 0m22.574s 00:09:57.739 user 0m25.962s 00:09:57.739 sys 0m7.037s 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.739 ************************************ 00:09:57.739 END TEST nvmf_queue_depth 00:09:57.739 ************************************ 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.739 ************************************ 00:09:57.739 START TEST nvmf_target_multipath 00:09:57.739 ************************************ 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:57.739 * Looking for test storage... 00:09:57.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.739 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.740 --rc genhtml_branch_coverage=1 00:09:57.740 --rc genhtml_function_coverage=1 00:09:57.740 --rc genhtml_legend=1 00:09:57.740 --rc geninfo_all_blocks=1 00:09:57.740 --rc geninfo_unexecuted_blocks=1 00:09:57.740 00:09:57.740 ' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.740 --rc genhtml_branch_coverage=1 00:09:57.740 --rc genhtml_function_coverage=1 00:09:57.740 --rc genhtml_legend=1 00:09:57.740 --rc geninfo_all_blocks=1 00:09:57.740 --rc geninfo_unexecuted_blocks=1 00:09:57.740 00:09:57.740 ' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.740 --rc genhtml_branch_coverage=1 00:09:57.740 --rc genhtml_function_coverage=1 00:09:57.740 --rc genhtml_legend=1 00:09:57.740 --rc geninfo_all_blocks=1 00:09:57.740 --rc geninfo_unexecuted_blocks=1 00:09:57.740 00:09:57.740 ' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.740 --rc genhtml_branch_coverage=1 00:09:57.740 --rc genhtml_function_coverage=1 00:09:57.740 --rc genhtml_legend=1 00:09:57.740 --rc geninfo_all_blocks=1 00:09:57.740 --rc geninfo_unexecuted_blocks=1 00:09:57.740 00:09:57.740 ' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.740 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.741 10:27:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:05.878 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:05.878 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:05.878 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.878 10:27:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:05.878 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:05.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:10:05.878 00:10:05.878 --- 10.0.0.2 ping statistics --- 00:10:05.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.878 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:10:05.878 00:10:05.878 --- 10.0.0.1 ping statistics --- 00:10:05.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.878 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:05.878 only one NIC for nvmf test 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
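[editor's note] The trace above is the whole of nvmf_tcp_init for this phy run: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened in iptables, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. The multipath test then exits early because this topology has only the one NIC pair. Condensed from the commands actually executed above (only the iptables comment string is abbreviated here):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from a clean slate
  ip netns add cvl_0_0_ns_spdk                         # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                   # comment abbreviated
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns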
00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.878 rmmod nvme_tcp 00:10:05.878 rmmod nvme_fabrics 00:10:05.878 rmmod nvme_keyring 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.878 10:27:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.261 00:10:07.261 real 0m9.920s 00:10:07.261 user 0m2.276s 00:10:07.261 sys 0m5.611s 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:07.261 ************************************ 00:10:07.261 END TEST nvmf_target_multipath 00:10:07.261 ************************************ 00:10:07.261 10:27:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:07.262 10:27:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.262 10:27:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.262 10:27:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.522 ************************************ 00:10:07.522 START TEST nvmf_zcopy 00:10:07.522 ************************************ 00:10:07.522 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:07.522 * Looking for test storage... 
00:10:07.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.523 --rc genhtml_branch_coverage=1 00:10:07.523 --rc genhtml_function_coverage=1 00:10:07.523 --rc genhtml_legend=1 00:10:07.523 --rc geninfo_all_blocks=1 00:10:07.523 --rc geninfo_unexecuted_blocks=1 00:10:07.523 00:10:07.523 ' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.523 --rc genhtml_branch_coverage=1 00:10:07.523 --rc genhtml_function_coverage=1 00:10:07.523 --rc genhtml_legend=1 00:10:07.523 --rc geninfo_all_blocks=1 00:10:07.523 --rc geninfo_unexecuted_blocks=1 00:10:07.523 00:10:07.523 ' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.523 --rc genhtml_branch_coverage=1 00:10:07.523 --rc genhtml_function_coverage=1 00:10:07.523 --rc genhtml_legend=1 00:10:07.523 --rc geninfo_all_blocks=1 00:10:07.523 --rc geninfo_unexecuted_blocks=1 00:10:07.523 00:10:07.523 ' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.523 --rc genhtml_branch_coverage=1 00:10:07.523 --rc genhtml_function_coverage=1 00:10:07.523 --rc genhtml_legend=1 00:10:07.523 --rc geninfo_all_blocks=1 00:10:07.523 --rc geninfo_unexecuted_blocks=1 00:10:07.523 00:10:07.523 ' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.523 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.784 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:15.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:15.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:15.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:15.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.935 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:10:15.936 00:10:15.936 --- 10.0.0.2 ping statistics --- 00:10:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.936 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:10:15.936 00:10:15.936 --- 10.0.0.1 ping statistics --- 00:10:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.936 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1897422 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1897422 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1897422 ']' 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.936 10:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.936 [2024-11-20 10:27:47.453493] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
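[editor's note] At this point zcopy.sh has rebuilt the same namespace topology and nvmfappstart has launched the target: nvmf_tgt runs inside cvl_0_0_ns_spdk pinned to core 1 (-m 0x2) with all trace groups enabled (-e 0xFFFF), and waitforlisten blocks on pid 1897422 until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket; the polling loop below is illustrative, not the exact waitforlisten helper from autotest_common.sh:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # rpc_get_methods only succeeds once the app is up and listening on the socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done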
00:10:15.936 [2024-11-20 10:27:47.453559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.936 [2024-11-20 10:27:47.554961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.936 [2024-11-20 10:27:47.606504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.936 [2024-11-20 10:27:47.606556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.936 [2024-11-20 10:27:47.606571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.936 [2024-11-20 10:27:47.606578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.936 [2024-11-20 10:27:47.606584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.936 [2024-11-20 10:27:47.607417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.936 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.936 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:15.936 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.936 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.936 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 [2024-11-20 10:27:48.330095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 [2024-11-20 10:27:48.354407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 malloc0 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.198 { 00:10:16.198 "params": { 00:10:16.198 "name": "Nvme$subsystem", 00:10:16.198 "trtype": "$TEST_TRANSPORT", 00:10:16.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.198 "adrfam": "ipv4", 00:10:16.198 "trsvcid": "$NVMF_PORT", 00:10:16.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.198 "hdgst": ${hdgst:-false}, 00:10:16.198 "ddgst": ${ddgst:-false} 00:10:16.198 }, 00:10:16.198 "method": "bdev_nvme_attach_controller" 00:10:16.198 } 00:10:16.198 EOF 00:10:16.198 )") 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
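Each rpc_cmd call in the trace above is a thin wrapper that forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock, so the whole target-side setup collapses to six RPCs. A consolidated sketch with every flag copied from the trace (-c 0 zeroes the in-capsule data size and --zcopy enables zero-copy on the TCP transport; on the subsystem, -a allows any host, -s sets the serial number, and -m 10 caps the namespace count):

  RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0    # 32 MB malloc bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1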
00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:16.198 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.198 "params": { 00:10:16.198 "name": "Nvme1", 00:10:16.198 "trtype": "tcp", 00:10:16.198 "traddr": "10.0.0.2", 00:10:16.198 "adrfam": "ipv4", 00:10:16.198 "trsvcid": "4420", 00:10:16.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.198 "hdgst": false, 00:10:16.198 "ddgst": false 00:10:16.198 }, 00:10:16.199 "method": "bdev_nvme_attach_controller" 00:10:16.199 }' 00:10:16.199 [2024-11-20 10:27:48.466747] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:10:16.199 [2024-11-20 10:27:48.466812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897477 ] 00:10:16.199 [2024-11-20 10:27:48.559464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.459 [2024-11-20 10:27:48.612992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.720 Running I/O for 10 seconds... 00:10:18.602 6463.00 IOPS, 50.49 MiB/s [2024-11-20T09:27:52.357Z] 7922.00 IOPS, 61.89 MiB/s [2024-11-20T09:27:53.296Z] 8530.67 IOPS, 66.65 MiB/s [2024-11-20T09:27:54.235Z] 8838.25 IOPS, 69.05 MiB/s [2024-11-20T09:27:55.174Z] 9017.40 IOPS, 70.45 MiB/s [2024-11-20T09:27:56.115Z] 9136.50 IOPS, 71.38 MiB/s [2024-11-20T09:27:57.054Z] 9220.71 IOPS, 72.04 MiB/s [2024-11-20T09:27:57.993Z] 9283.62 IOPS, 72.53 MiB/s [2024-11-20T09:27:59.378Z] 9332.11 IOPS, 72.91 MiB/s [2024-11-20T09:27:59.378Z] 9370.10 IOPS, 73.20 MiB/s 00:10:27.002 Latency(us) 00:10:27.002 [2024-11-20T09:27:59.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.002 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:27.002 Verification LBA range: start 0x0 length 0x1000 00:10:27.002 Nvme1n1 : 10.01 9372.54 73.22 0.00 0.00 13611.21 1952.43 28180.48 00:10:27.002 [2024-11-20T09:27:59.378Z] =================================================================================================================== 00:10:27.002 [2024-11-20T09:27:59.378Z] Total : 9372.54 73.22 0.00 0.00 13611.21 1952.43 28180.48 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1899656 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:27.002 { 00:10:27.002 "params": { 00:10:27.002 "name": 
"Nvme$subsystem", 00:10:27.002 "trtype": "$TEST_TRANSPORT", 00:10:27.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.002 "adrfam": "ipv4", 00:10:27.002 "trsvcid": "$NVMF_PORT", 00:10:27.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.002 "hdgst": ${hdgst:-false}, 00:10:27.002 "ddgst": ${ddgst:-false} 00:10:27.002 }, 00:10:27.002 "method": "bdev_nvme_attach_controller" 00:10:27.002 } 00:10:27.002 EOF 00:10:27.002 )") 00:10:27.002 [2024-11-20 10:27:59.084324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.002 [2024-11-20 10:27:59.084353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:27.002 10:27:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:27.002 "params": { 00:10:27.002 "name": "Nvme1", 00:10:27.002 "trtype": "tcp", 00:10:27.002 "traddr": "10.0.0.2", 00:10:27.002 "adrfam": "ipv4", 00:10:27.002 "trsvcid": "4420", 00:10:27.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.002 "hdgst": false, 00:10:27.002 "ddgst": false 00:10:27.002 }, 00:10:27.002 "method": "bdev_nvme_attach_controller" 00:10:27.002 }' 00:10:27.002 [2024-11-20 10:27:59.096318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.002 [2024-11-20 10:27:59.096328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.002 [2024-11-20 10:27:59.108348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.002 [2024-11-20 10:27:59.108357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.002 [2024-11-20 10:27:59.120378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.002 [2024-11-20 10:27:59.120386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.002 [2024-11-20 10:27:59.126765] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:10:27.002 [2024-11-20 10:27:59.126811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899656 ]
[... the "Requested NSID 1 already in use" / "Unable to add namespace" ERROR pair repeats every ~12 ms while bdevperf initializes; repetitions 10:27:59.132 through 10:27:59.204 elided ...]
00:10:27.002 [2024-11-20 10:27:59.208226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... ERROR pair repetitions 10:27:59.216 through 10:27:59.228 elided ...]
00:10:27.002 [2024-11-20 10:27:59.237603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... ERROR pair repetitions 10:27:59.240 through 10:27:59.409 elided ...]
00:10:27.002 Running I/O for 5 seconds...
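The ERROR pair elided above, and repeated for the rest of the run below, is explained by the nvmf_rpc_ns_paused frame: each nvmf_subsystem_add_ns RPC pauses the subsystem, attempts to attach malloc0 as NSID 1 again, fails because that NSID is already occupied, and resumes. Hammering that path while bdevperf drives 50/50 random I/O appears to be the point of this phase: it exercises subsystem pause/resume under live zero-copy traffic. A sketch of the kind of driver loop that produces this pattern, reusing $RPC from the earlier sketch and the perfpid captured above (the suite's actual iteration count and pacing may differ):

  # re-add the already-attached namespace for as long as bdevperf is alive;
  # every call is expected to fail with "Requested NSID 1 already in use"
  while kill -0 "$perfpid" 2> /dev/null; do
          $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done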
00:10:27.290 [2024-11-20 10:27:59.421145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:27.290 [2024-11-20 10:27:59.421152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same ERROR pair repeats roughly every 12-15 ms for the rest of the 5-second run; repetitions 10:27:59.436 through 10:28:02.080 elided, keeping only the interleaved per-second throughput markers ...]
00:10:28.231 19077.00 IOPS, 149.04 MiB/s [2024-11-20T09:28:00.607Z]
00:10:29.275 19199.50 IOPS, 150.00 MiB/s [2024-11-20T09:28:01.651Z]
00:10:29.795 [2024-11-20 10:28:02.092967]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.795 [2024-11-20 10:28:02.092983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.795 [2024-11-20 10:28:02.105649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.795 [2024-11-20 10:28:02.105663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.795 [2024-11-20 10:28:02.118502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.795 [2024-11-20 10:28:02.118518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.795 [2024-11-20 10:28:02.131133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.795 [2024-11-20 10:28:02.131148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.795 [2024-11-20 10:28:02.144327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.795 [2024-11-20 10:28:02.144342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.795 [2024-11-20 10:28:02.157429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.795 [2024-11-20 10:28:02.157444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.055 [2024-11-20 10:28:02.171070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.055 [2024-11-20 10:28:02.171085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.055 [2024-11-20 10:28:02.184433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.055 [2024-11-20 10:28:02.184448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.055 [2024-11-20 10:28:02.197132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.055 [2024-11-20 10:28:02.197147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.055 [2024-11-20 10:28:02.209926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.055 [2024-11-20 10:28:02.209940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.056 [2024-11-20 10:28:02.222464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.056 [2024-11-20 10:28:02.222479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.056 [2024-11-20 10:28:02.235848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.056 [2024-11-20 10:28:02.235863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.056 [2024-11-20 10:28:02.248960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.056 [2024-11-20 10:28:02.248974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.056 [2024-11-20 10:28:02.261310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.056 [2024-11-20 10:28:02.261324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.056 [2024-11-20 10:28:02.273816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.056 [2024-11-20 10:28:02.273831] 
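Throughout this run the two *ERROR* lines arrive as a pair: subsystem.c:2123 rejects the duplicate NSID inside spdk_nvmf_subsystem_add_ns_ext(), and nvmf_rpc.c:1517 then fails the enclosing nvmf_subsystem_add_ns RPC. A minimal by-hand reproduction of the pair, sketched here under two assumptions not taken from this log (a running target whose subsystem nqn.2016-06.io.spdk:cnode1 already exposes NSID 1, and a spare bdev named Malloc1):

  # Sketch only: request NSID 1 a second time; the target must refuse it.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  # expected target log: subsystem.c: ... Requested NSID 1 already in use
  # expected target log: nvmf_rpc.c:  ... Unable to add namespace

The same "nvmf_subsystem_add_ns ... -n 1" form appears verbatim in the zcopy.sh trace further down, so the flag usage matches this build's rpc.py.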
00:10:30.315 19223.33 IOPS, 150.18 MiB/s [2024-11-20T09:28:02.691Z]
[... subsystem.c:2123 / nvmf_rpc.c:1517 error pairs continue; repeated entries trimmed ...]
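The interleaved throughput samples appear to be periodic progress updates from the I/O job that keeps running while the RPC loop fails; they are internally consistent with the 8192-byte I/O size reported in the job summary further down, since 19223.33 IOPS x 8192 B is about 150.18 MiB/s. A quick check, with awk standing in as a calculator:

  awk 'BEGIN { printf "%.2f MiB/s\n", 19223.33 * 8192 / 1048576 }'   # -> 150.18 MiB/s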
00:10:31.099 19235.00 IOPS, 150.27 MiB/s [2024-11-20T09:28:03.475Z]
[... subsystem.c:2123 / nvmf_rpc.c:1517 error pairs continue; repeated entries trimmed ...]
00:10:32.140 19237.80 IOPS, 150.30 MiB/s
00:10:32.140 Latency(us)
00:10:32.140 [2024-11-20T09:28:04.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.140 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:32.140 Nvme1n1 : 5.00 19245.63 150.36 0.00 0.00 6646.14 2785.28 14745.60
00:10:32.140 [2024-11-20T09:28:04.516Z] ===================================================================================================================
00:10:32.140 [2024-11-20T09:28:04.516Z] Total : 19245.63 150.36 0.00 0.00 6646.14 2785.28 14745.60
[... subsystem.c:2123 / nvmf_rpc.c:1517 error pairs resume during shutdown; repeated entries trimmed ...]
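The table is also consistent with Little's law for a closed-loop job at queue depth 128: average latency is about depth / IOPS = 128 / 19245.63 s, roughly 6651 us, against the reported 6646.14 us average. Again with awk as a calculator:

  awk 'BEGIN { printf "%.2f us\n", 128 / 19245.63 * 1e6 }'   # -> 6650.86 us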
00:10:32.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1899656) - No such process
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1899656
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.400 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:32.400 delay0
00:10:32.401 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.401 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:32.401 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.401 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:32.401 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.401 10:28:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:32.401 [2024-11-20 10:28:04.705665] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:40.530 Initializing NVMe Controllers
00:10:40.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:40.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:40.530 Initialization complete. Launching workers.
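zcopy.sh lines 52-56 above set up the abort phase: NSID 1 is detached, re-attached as a delay bdev that holds every I/O for 1,000,000 us (per SPDK's bdev_delay_create interface, the -r/-t/-w/-n arguments are the average and p99 read and write latencies in microseconds), and the abort example then drives 64-deep random read/write traffic for 5 seconds (-q 64 -w randrw -M 50 -t 5) while racing abort commands against the stalled I/O. The counters below should close arithmetically: successful plus unsuccessful aborts equal aborts submitted, and completed plus failed I/O equal aborts submitted plus aborts that could not be submitted. Checking with awk as a calculator, using the numbers reported below:

  awk 'BEGIN { print 32685 + 92, 238 + 32657, 32777 + 118 }'   # -> 32777 32895 32895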
00:10:40.530 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 32657
00:10:40.530 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32777, failed to submit 118
00:10:40.530 success 32685, unsuccessful 92, failed 0
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:40.530 rmmod nvme_tcp
00:10:40.530 rmmod nvme_fabrics
00:10:40.530 rmmod nvme_keyring
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1897422 ']'
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1897422
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1897422 ']'
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1897422
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1897422
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1897422'
00:10:40.530 killing process with pid 1897422
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1897422
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1897422
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.530 10:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.914 00:10:41.914 real 0m34.404s 00:10:41.914 user 0m45.233s 00:10:41.914 sys 0m11.869s 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.914 ************************************ 00:10:41.914 END TEST nvmf_zcopy 00:10:41.914 ************************************ 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.914 ************************************ 00:10:41.914 START TEST nvmf_nmic 00:10:41.914 ************************************ 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.914 * Looking for test storage... 
00:10:41.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.914 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.175 --rc genhtml_branch_coverage=1 00:10:42.175 --rc genhtml_function_coverage=1 00:10:42.175 --rc genhtml_legend=1 00:10:42.175 --rc geninfo_all_blocks=1 00:10:42.175 --rc geninfo_unexecuted_blocks=1 00:10:42.175 00:10:42.175 ' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.175 --rc genhtml_branch_coverage=1 00:10:42.175 --rc genhtml_function_coverage=1 00:10:42.175 --rc genhtml_legend=1 00:10:42.175 --rc geninfo_all_blocks=1 00:10:42.175 --rc geninfo_unexecuted_blocks=1 00:10:42.175 00:10:42.175 ' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.175 --rc genhtml_branch_coverage=1 00:10:42.175 --rc genhtml_function_coverage=1 00:10:42.175 --rc genhtml_legend=1 00:10:42.175 --rc geninfo_all_blocks=1 00:10:42.175 --rc geninfo_unexecuted_blocks=1 00:10:42.175 00:10:42.175 ' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.175 --rc genhtml_branch_coverage=1 00:10:42.175 --rc genhtml_function_coverage=1 00:10:42.175 --rc genhtml_legend=1 00:10:42.175 --rc geninfo_all_blocks=1 00:10:42.175 --rc geninfo_unexecuted_blocks=1 00:10:42.175 00:10:42.175 ' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
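The scripts/common.sh trace just above is SPDK's component-wise version compare: "lt 1.15 2" splits each version string on ".", "-" and ":" into an array, walks the components left to right, and succeeds as soon as a left component is numerically smaller. A minimal standalone sketch of the same idea (simplified to dot-separated numeric versions; an illustration, not the verbatim scripts/common.sh source):

# lt VER1 VER2 -> exit 0 if VER1 is strictly older than VER2
lt() {
    local IFS=.                      # split on dots only (common.sh also splits on - and :)
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components compare as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                         # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2: use the 1.x LCOV option set"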
00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.175 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:42.176 
10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.176 10:28:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:50.317 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:50.317 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.317 10:28:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:50.317 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:50.317 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.317 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:10:50.318 00:10:50.318 --- 10.0.0.2 ping statistics --- 00:10:50.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.318 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:10:50.318 00:10:50.318 --- 10.0.0.1 ping statistics --- 00:10:50.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.318 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1906499 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1906499 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1906499 ']' 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.318 10:28:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.318 [2024-11-20 10:28:21.925692] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:10:50.318 [2024-11-20 10:28:21.925758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.318 [2024-11-20 10:28:22.028034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.318 [2024-11-20 10:28:22.083120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.318 [2024-11-20 10:28:22.083190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.318 [2024-11-20 10:28:22.083200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.318 [2024-11-20 10:28:22.083208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.318 [2024-11-20 10:28:22.083214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.318 [2024-11-20 10:28:22.085638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.318 [2024-11-20 10:28:22.085804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.318 [2024-11-20 10:28:22.085968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.318 [2024-11-20 10:28:22.085967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 [2024-11-20 10:28:22.810439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 Malloc0 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.578 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.579 [2024-11-20 10:28:22.887321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:50.579 test case1: single bdev can't be used in multiple subsystems 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.579 [2024-11-20 10:28:22.923139] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:50.579 [2024-11-20 10:28:22.923174] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:50.579 [2024-11-20 10:28:22.923184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.579 request: 00:10:50.579 { 00:10:50.579 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:50.579 "namespace": { 00:10:50.579 "bdev_name": "Malloc0", 00:10:50.579 "no_auto_visible": false 
00:10:50.579 }, 00:10:50.579 "method": "nvmf_subsystem_add_ns", 00:10:50.579 "req_id": 1 00:10:50.579 } 00:10:50.579 Got JSON-RPC error response 00:10:50.579 response: 00:10:50.579 { 00:10:50.579 "code": -32602, 00:10:50.579 "message": "Invalid parameters" 00:10:50.579 } 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:50.579 Adding namespace failed - expected result. 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:50.579 test case2: host connect to nvmf target in multiple paths 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:50.579 [2024-11-20 10:28:22.935331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.579 10:28:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.489 10:28:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:53.871 10:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.871 10:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.871 10:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.871 10:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:53.871 10:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:55.782 10:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:55.782 [global] 00:10:55.782 thread=1 00:10:55.782 invalidate=1 00:10:55.782 rw=write 00:10:55.782 time_based=1 00:10:55.782 runtime=1 00:10:55.782 ioengine=libaio 00:10:55.782 direct=1 00:10:55.782 bs=4096 00:10:55.782 iodepth=1 00:10:55.782 norandommap=0 00:10:55.782 numjobs=1 00:10:55.782 00:10:55.782 verify_dump=1 00:10:55.782 verify_backlog=512 00:10:55.782 verify_state_save=0 00:10:55.782 do_verify=1 00:10:55.782 verify=crc32c-intel 00:10:55.782 [job0] 00:10:55.782 filename=/dev/nvme0n1 00:10:55.782 Could not set queue depth (nvme0n1) 00:10:56.350 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.350 fio-3.35 00:10:56.350 Starting 1 thread 00:10:57.295 00:10:57.295 job0: (groupid=0, jobs=1): err= 0: pid=1907863: Wed Nov 20 10:28:29 2024 00:10:57.295 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1019msec) 00:10:57.295 slat (nsec): min=8097, max=26972, avg=24980.00, stdev=4359.77 00:10:57.295 clat (usec): min=898, max=42048, avg=39521.39, stdev=9953.64 00:10:57.295 lat (usec): min=924, max=42074, avg=39546.37, stdev=9953.40 00:10:57.295 clat percentiles (usec): 00:10:57.295 | 1.00th=[ 898], 5.00th=[ 898], 10.00th=[41681], 20.00th=[41681], 00:10:57.295 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:57.295 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:57.295 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:57.295 | 99.99th=[42206] 00:10:57.295 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:57.295 slat (usec): min=10, max=25412, avg=78.84, stdev=1121.86 00:10:57.295 clat (usec): min=242, max=798, avg=590.81, stdev=96.61 00:10:57.295 lat (usec): min=256, max=26135, avg=669.65, stdev=1132.21 00:10:57.295 clat percentiles (usec): 00:10:57.295 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 465], 20.00th=[ 506], 00:10:57.295 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:10:57.295 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 734], 00:10:57.295 | 99.00th=[ 758], 99.50th=[ 766], 99.90th=[ 799], 99.95th=[ 799], 00:10:57.295 | 99.99th=[ 799] 00:10:57.295 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:57.295 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:57.295 lat (usec) : 250=0.19%, 500=17.01%, 750=78.07%, 1000=1.70% 00:10:57.295 lat (msec) : 50=3.02% 00:10:57.295 cpu : usr=0.79%, sys=1.28%, ctx=533, majf=0, minf=1 00:10:57.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.295 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.295 00:10:57.295 Run status group 0 (all jobs): 00:10:57.295 READ: bw=66.7KiB/s (68.3kB/s), 66.7KiB/s-66.7KiB/s (68.3kB/s-68.3kB/s), io=68.0KiB (69.6kB), run=1019-1019msec 00:10:57.295 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec 00:10:57.295 00:10:57.295 Disk stats (read/write): 00:10:57.295 nvme0n1: ios=39/512, merge=0/0, ticks=1511/285, in_queue=1796, util=98.70% 00:10:57.295 10:28:29 
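Condensed, the nmic exercise traced above amounts to the following target- and host-side sequence (rpc_cmd in SPDK's autotest harness is a thin wrapper around scripts/rpc.py; the addresses, NQNs and sizes below are the ones that appear in the log, and this is a sketch of the flow rather than the verbatim nmic.sh source):

# target side: TCP transport, one malloc-backed subsystem, two listeners
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# test case1: adding Malloc0 to a second subsystem must fail
# (the bdev is already claimed exclusive_write by cnode1)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "expected failure"
# test case2 / host side: connect to the same subsystem over both listeners,
# then run the verified write job against the resulting namespace
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # 4 KiB blocks, QD1, 1 s, crc32c-verified
nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # drops both paths, hence "disconnected 2 controller(s)" below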
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.554 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.555 rmmod nvme_tcp 00:10:57.555 rmmod nvme_fabrics 00:10:57.555 rmmod nvme_keyring 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1906499 ']' 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1906499 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1906499 ']' 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1906499 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906499 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906499' 00:10:57.555 killing process with pid 1906499 00:10:57.555 10:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1906499 00:10:57.555 10:28:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1906499 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.814 10:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.357 00:11:00.357 real 0m17.958s 00:11:00.357 user 0m49.469s 00:11:00.357 sys 0m6.578s 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.357 ************************************ 00:11:00.357 END TEST nvmf_nmic 00:11:00.357 ************************************ 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.357 ************************************ 00:11:00.357 START TEST nvmf_fio_target 00:11:00.357 ************************************ 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:00.357 * Looking for test storage... 
00:11:00.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.357 --rc genhtml_branch_coverage=1 00:11:00.357 --rc genhtml_function_coverage=1 00:11:00.357 --rc genhtml_legend=1 00:11:00.357 --rc geninfo_all_blocks=1 00:11:00.357 --rc geninfo_unexecuted_blocks=1 00:11:00.357 00:11:00.357 ' 00:11:00.357 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.357 --rc genhtml_branch_coverage=1 00:11:00.358 --rc genhtml_function_coverage=1 00:11:00.358 --rc genhtml_legend=1 00:11:00.358 --rc geninfo_all_blocks=1 00:11:00.358 --rc geninfo_unexecuted_blocks=1 00:11:00.358 00:11:00.358 ' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.358 --rc genhtml_branch_coverage=1 00:11:00.358 --rc genhtml_function_coverage=1 00:11:00.358 --rc genhtml_legend=1 00:11:00.358 --rc geninfo_all_blocks=1 00:11:00.358 --rc geninfo_unexecuted_blocks=1 00:11:00.358 00:11:00.358 ' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.358 --rc genhtml_branch_coverage=1 00:11:00.358 --rc genhtml_function_coverage=1 00:11:00.358 --rc genhtml_legend=1 00:11:00.358 --rc geninfo_all_blocks=1 00:11:00.358 --rc geninfo_unexecuted_blocks=1 00:11:00.358 00:11:00.358 ' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.358 10:28:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.358 10:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.497 10:28:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.497 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:08.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:08.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.498 10:28:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:08.498 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:08.498 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.498 10:28:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:11:08.498 00:11:08.498 --- 10.0.0.2 ping statistics --- 00:11:08.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.498 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:08.498 00:11:08.498 --- 10.0.0.1 ping statistics --- 00:11:08.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.498 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1912416 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1912416 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1912416 ']' 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.498 10:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.498 [2024-11-20 10:28:40.015002] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
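[editor's note] The nvmftestinit trace above amounts to a small recipe for the TCP test topology: one port of the E810 NIC (cvl_0_0) is moved into a dedicated network namespace to act as the target side, the peer port (cvl_0_1) stays in the default namespace as the initiator, both get addresses on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, a ping in each direction confirms reachability, and nvmf_tgt is then launched inside the namespace. A minimal consolidated sketch of that sequence, run as root, using only commands that appear in this log (the cvl_0_0/cvl_0_1 device names and the nvmf_tgt path are specific to this run; on another machine the net devices and SPDK checkout location will differ):

  # Sketch of the netns-based NVMe/TCP loopback topology set up by nvmftestinit
  # (assumes two cabled ports named cvl_0_0 / cvl_0_1, as in this run).
  ip netns add cvl_0_0_ns_spdk                                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator IP, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target IP, inside ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator
  modprobe nvme-tcp                                                 # kernel initiator driver
  # SPDK target runs inside the namespace (path relative to the SPDK checkout):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Everything that follows in the log, the rpc.py calls that create the tcp transport, the Malloc/raid0/concat bdevs and the nqn.2016-06.io.spdk:cnode1 subsystem with its 10.0.0.2:4420 listener, the nvme connect from the default namespace, and the four fio runs against /dev/nvme0n1..n4, talks to that nvmf_tgt instance over this topology.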
00:11:08.498 [2024-11-20 10:28:40.015067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.498 [2024-11-20 10:28:40.117340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.499 [2024-11-20 10:28:40.170297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.499 [2024-11-20 10:28:40.170373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.499 [2024-11-20 10:28:40.170381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.499 [2024-11-20 10:28:40.170390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.499 [2024-11-20 10:28:40.170400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.499 [2024-11-20 10:28:40.172389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.499 [2024-11-20 10:28:40.172558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.499 [2024-11-20 10:28:40.172718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.499 [2024-11-20 10:28:40.172717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.499 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.499 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:08.499 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.499 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.499 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.759 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.759 10:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:08.759 [2024-11-20 10:28:41.065606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.759 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.019 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:09.019 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.280 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:09.280 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.540 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:09.541 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.801 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:09.801 10:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:09.801 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.062 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:10.062 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.322 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:10.322 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.582 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:10.582 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:10.582 10:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.841 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:10.841 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.101 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:11.101 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.101 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.361 [2024-11-20 10:28:43.606686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.361 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:11.621 10:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:11.881 10:28:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.265 10:28:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:13.265 10:28:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.265 10:28:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.265 10:28:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:13.265 10:28:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:13.265 10:28:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:15.177 10:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:15.177 [global] 00:11:15.177 thread=1 00:11:15.177 invalidate=1 00:11:15.177 rw=write 00:11:15.177 time_based=1 00:11:15.177 runtime=1 00:11:15.177 ioengine=libaio 00:11:15.177 direct=1 00:11:15.177 bs=4096 00:11:15.177 iodepth=1 00:11:15.177 norandommap=0 00:11:15.177 numjobs=1 00:11:15.177 00:11:15.177 verify_dump=1 00:11:15.177 verify_backlog=512 00:11:15.177 verify_state_save=0 00:11:15.177 do_verify=1 00:11:15.177 verify=crc32c-intel 00:11:15.177 [job0] 00:11:15.177 filename=/dev/nvme0n1 00:11:15.461 [job1] 00:11:15.461 filename=/dev/nvme0n2 00:11:15.461 [job2] 00:11:15.461 filename=/dev/nvme0n3 00:11:15.461 [job3] 00:11:15.461 filename=/dev/nvme0n4 00:11:15.461 Could not set queue depth (nvme0n1) 00:11:15.461 Could not set queue depth (nvme0n2) 00:11:15.461 Could not set queue depth (nvme0n3) 00:11:15.461 Could not set queue depth (nvme0n4) 00:11:15.725 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.725 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.725 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.725 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.725 fio-3.35 00:11:15.725 Starting 4 threads 00:11:17.108 00:11:17.108 job0: (groupid=0, jobs=1): err= 0: pid=1914233: Wed Nov 20 10:28:49 2024 00:11:17.108 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:17.108 slat (nsec): min=6482, max=55402, avg=22094.61, stdev=7860.92 00:11:17.108 clat (usec): min=142, max=1467, avg=642.65, stdev=346.48 00:11:17.108 lat (usec): min=167, max=1492, avg=664.75, stdev=350.29 00:11:17.108 clat percentiles (usec): 00:11:17.108 | 1.00th=[ 182], 5.00th=[ 225], 10.00th=[ 241], 20.00th=[ 302], 
00:11:17.108 | 30.00th=[ 343], 40.00th=[ 379], 50.00th=[ 469], 60.00th=[ 857], 00:11:17.108 | 70.00th=[ 947], 80.00th=[ 1012], 90.00th=[ 1090], 95.00th=[ 1139], 00:11:17.108 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1467], 00:11:17.108 | 99.99th=[ 1467] 00:11:17.108 write: IOPS=1251, BW=5007KiB/s (5127kB/s)(5012KiB/1001msec); 0 zone resets 00:11:17.108 slat (nsec): min=9232, max=66434, avg=18135.59, stdev=10886.23 00:11:17.108 clat (usec): min=82, max=892, avg=226.11, stdev=185.48 00:11:17.108 lat (usec): min=92, max=924, avg=244.25, stdev=192.86 00:11:17.108 clat percentiles (usec): 00:11:17.108 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 97], 20.00th=[ 102], 00:11:17.108 | 30.00th=[ 106], 40.00th=[ 113], 50.00th=[ 119], 60.00th=[ 186], 00:11:17.108 | 70.00th=[ 239], 80.00th=[ 306], 90.00th=[ 578], 95.00th=[ 660], 00:11:17.108 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 824], 99.95th=[ 889], 00:11:17.108 | 99.99th=[ 889] 00:11:17.108 bw ( KiB/s): min= 8192, max= 8192, per=55.59%, avg=8192.00, stdev= 0.00, samples=1 00:11:17.108 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:17.108 lat (usec) : 100=9.22%, 250=37.20%, 500=23.28%, 750=8.96%, 1000=11.42% 00:11:17.108 lat (msec) : 2=9.93% 00:11:17.108 cpu : usr=1.90%, sys=5.50%, ctx=2277, majf=0, minf=1 00:11:17.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.108 issued rwts: total=1024,1253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.108 job1: (groupid=0, jobs=1): err= 0: pid=1914266: Wed Nov 20 10:28:49 2024 00:11:17.108 read: IOPS=498, BW=1994KiB/s (2042kB/s)(2064KiB/1035msec) 00:11:17.108 slat (nsec): min=7090, max=46083, avg=24293.01, stdev=7723.16 00:11:17.108 clat (usec): min=259, max=41999, avg=1161.33, stdev=4412.24 00:11:17.108 lat (usec): min=286, max=42026, avg=1185.63, stdev=4412.30 00:11:17.108 clat percentiles (usec): 00:11:17.108 | 1.00th=[ 347], 5.00th=[ 453], 10.00th=[ 506], 20.00th=[ 545], 00:11:17.108 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 709], 00:11:17.108 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 840], 00:11:17.108 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:17.108 | 99.99th=[42206] 00:11:17.108 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:11:17.108 slat (usec): min=9, max=32930, avg=61.15, stdev=1028.22 00:11:17.108 clat (usec): min=92, max=654, avg=340.92, stdev=115.66 00:11:17.108 lat (usec): min=104, max=33297, avg=402.07, stdev=1035.95 00:11:17.108 clat percentiles (usec): 00:11:17.108 | 1.00th=[ 103], 5.00th=[ 119], 10.00th=[ 153], 20.00th=[ 251], 00:11:17.108 | 30.00th=[ 285], 40.00th=[ 322], 50.00th=[ 351], 60.00th=[ 371], 00:11:17.108 | 70.00th=[ 412], 80.00th=[ 449], 90.00th=[ 482], 95.00th=[ 510], 00:11:17.108 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 635], 99.95th=[ 652], 00:11:17.108 | 99.99th=[ 652] 00:11:17.108 bw ( KiB/s): min= 4096, max= 4096, per=27.80%, avg=4096.00, stdev= 0.00, samples=2 00:11:17.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:17.108 lat (usec) : 100=0.39%, 250=12.73%, 500=52.27%, 750=24.61%, 1000=9.55% 00:11:17.108 lat (msec) : 20=0.06%, 50=0.39% 00:11:17.109 cpu : usr=2.22%, sys=3.97%, ctx=1542, majf=0, minf=1 00:11:17.109 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.109 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.109 job2: (groupid=0, jobs=1): err= 0: pid=1914303: Wed Nov 20 10:28:49 2024 00:11:17.109 read: IOPS=359, BW=1437KiB/s (1472kB/s)(1440KiB/1002msec) 00:11:17.109 slat (nsec): min=3685, max=38675, avg=6766.30, stdev=4148.80 00:11:17.109 clat (usec): min=392, max=42998, avg=2112.50, stdev=7211.47 00:11:17.109 lat (usec): min=398, max=43025, avg=2119.27, stdev=7215.19 00:11:17.109 clat percentiles (usec): 00:11:17.109 | 1.00th=[ 545], 5.00th=[ 644], 10.00th=[ 685], 20.00th=[ 725], 00:11:17.109 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 816], 00:11:17.109 | 70.00th=[ 832], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 938], 00:11:17.109 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:17.109 | 99.99th=[43254] 00:11:17.109 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:17.109 slat (nsec): min=4761, max=32786, avg=7268.37, stdev=1447.36 00:11:17.109 clat (usec): min=242, max=685, avg=454.75, stdev=79.44 00:11:17.109 lat (usec): min=253, max=692, avg=462.02, stdev=79.57 00:11:17.109 clat percentiles (usec): 00:11:17.109 | 1.00th=[ 262], 5.00th=[ 302], 10.00th=[ 343], 20.00th=[ 388], 00:11:17.109 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 486], 00:11:17.109 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 545], 95.00th=[ 570], 00:11:17.109 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 685], 99.95th=[ 685], 00:11:17.109 | 99.99th=[ 685] 00:11:17.109 bw ( KiB/s): min= 4096, max= 4096, per=27.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:17.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:17.109 lat (usec) : 250=0.34%, 500=41.06%, 750=28.78%, 1000=28.21% 00:11:17.109 lat (msec) : 2=0.23%, 50=1.38% 00:11:17.109 cpu : usr=0.40%, sys=0.40%, ctx=876, majf=0, minf=1 00:11:17.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.109 issued rwts: total=360,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.109 job3: (groupid=0, jobs=1): err= 0: pid=1914316: Wed Nov 20 10:28:49 2024 00:11:17.109 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2064KiB/1009msec) 00:11:17.109 slat (nsec): min=7227, max=43891, avg=23857.05, stdev=7548.81 00:11:17.109 clat (usec): min=347, max=42237, avg=997.26, stdev=3145.36 00:11:17.109 lat (usec): min=355, max=42264, avg=1021.12, stdev=3145.66 00:11:17.109 clat percentiles (usec): 00:11:17.109 | 1.00th=[ 529], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 685], 00:11:17.109 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 783], 00:11:17.109 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 906], 00:11:17.109 | 99.00th=[ 1074], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:17.109 | 99.99th=[42206] 00:11:17.109 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:17.109 slat (nsec): min=9584, max=64100, avg=30312.48, stdev=9326.83 00:11:17.109 clat (usec): min=146, max=800, 
avg=429.81, stdev=91.46 00:11:17.109 lat (usec): min=179, max=835, avg=460.13, stdev=94.88 00:11:17.109 clat percentiles (usec): 00:11:17.109 | 1.00th=[ 255], 5.00th=[ 285], 10.00th=[ 314], 20.00th=[ 347], 00:11:17.109 | 30.00th=[ 375], 40.00th=[ 404], 50.00th=[ 437], 60.00th=[ 461], 00:11:17.109 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 570], 00:11:17.109 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 758], 99.95th=[ 799], 00:11:17.109 | 99.99th=[ 799] 00:11:17.109 bw ( KiB/s): min= 4096, max= 4096, per=27.80%, avg=4096.00, stdev= 0.00, samples=2 00:11:17.109 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:17.109 lat (usec) : 250=0.58%, 500=52.01%, 750=28.90%, 1000=18.05% 00:11:17.109 lat (msec) : 2=0.26%, 50=0.19% 00:11:17.109 cpu : usr=2.58%, sys=4.07%, ctx=1540, majf=0, minf=1 00:11:17.109 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.109 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.109 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.109 00:11:17.109 Run status group 0 (all jobs): 00:11:17.109 READ: bw=9337KiB/s (9561kB/s), 1437KiB/s-4092KiB/s (1472kB/s-4190kB/s), io=9664KiB (9896kB), run=1001-1035msec 00:11:17.109 WRITE: bw=14.4MiB/s (15.1MB/s), 2044KiB/s-5007KiB/s (2093kB/s-5127kB/s), io=14.9MiB (15.6MB), run=1001-1035msec 00:11:17.109 00:11:17.109 Disk stats (read/write): 00:11:17.109 nvme0n1: ios=992/1024, merge=0/0, ticks=681/146, in_queue=827, util=84.57% 00:11:17.109 nvme0n2: ios=543/1024, merge=0/0, ticks=1686/331, in_queue=2017, util=91.91% 00:11:17.109 nvme0n3: ios=414/512, merge=0/0, ticks=1069/230, in_queue=1299, util=95.87% 00:11:17.109 nvme0n4: ios=569/810, merge=0/0, ticks=490/321, in_queue=811, util=93.00% 00:11:17.109 10:28:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:17.109 [global] 00:11:17.109 thread=1 00:11:17.109 invalidate=1 00:11:17.109 rw=randwrite 00:11:17.109 time_based=1 00:11:17.109 runtime=1 00:11:17.109 ioengine=libaio 00:11:17.109 direct=1 00:11:17.109 bs=4096 00:11:17.109 iodepth=1 00:11:17.109 norandommap=0 00:11:17.109 numjobs=1 00:11:17.109 00:11:17.109 verify_dump=1 00:11:17.109 verify_backlog=512 00:11:17.109 verify_state_save=0 00:11:17.109 do_verify=1 00:11:17.109 verify=crc32c-intel 00:11:17.109 [job0] 00:11:17.109 filename=/dev/nvme0n1 00:11:17.109 [job1] 00:11:17.109 filename=/dev/nvme0n2 00:11:17.109 [job2] 00:11:17.109 filename=/dev/nvme0n3 00:11:17.109 [job3] 00:11:17.109 filename=/dev/nvme0n4 00:11:17.109 Could not set queue depth (nvme0n1) 00:11:17.109 Could not set queue depth (nvme0n2) 00:11:17.109 Could not set queue depth (nvme0n3) 00:11:17.109 Could not set queue depth (nvme0n4) 00:11:17.369 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.369 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.369 fio-3.35 00:11:17.369 Starting 4 threads 
00:11:18.751 00:11:18.751 job0: (groupid=0, jobs=1): err= 0: pid=1914772: Wed Nov 20 10:28:50 2024 00:11:18.751 read: IOPS=19, BW=77.8KiB/s (79.7kB/s)(80.0KiB/1028msec) 00:11:18.751 slat (nsec): min=11808, max=28351, avg=26802.40, stdev=3855.26 00:11:18.751 clat (usec): min=524, max=42920, avg=37588.45, stdev=12615.96 00:11:18.751 lat (usec): min=536, max=42948, avg=37615.26, stdev=12619.51 00:11:18.751 clat percentiles (usec): 00:11:18.751 | 1.00th=[ 529], 5.00th=[ 529], 10.00th=[ 938], 20.00th=[41157], 00:11:18.751 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:18.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:18.752 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:18.752 | 99.99th=[42730] 00:11:18.752 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:11:18.752 slat (nsec): min=6509, max=62687, avg=29530.34, stdev=11411.74 00:11:18.752 clat (usec): min=110, max=1985, avg=500.99, stdev=171.94 00:11:18.752 lat (usec): min=125, max=2020, avg=530.52, stdev=176.71 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 167], 5.00th=[ 255], 10.00th=[ 293], 20.00th=[ 351], 00:11:18.752 | 30.00th=[ 400], 40.00th=[ 453], 50.00th=[ 494], 60.00th=[ 545], 00:11:18.752 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 750], 00:11:18.752 | 99.00th=[ 873], 99.50th=[ 1123], 99.90th=[ 1991], 99.95th=[ 1991], 00:11:18.752 | 99.99th=[ 1991] 00:11:18.752 bw ( KiB/s): min= 4096, max= 4096, per=36.99%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.752 lat (usec) : 250=4.14%, 500=44.92%, 750=43.05%, 1000=3.95% 00:11:18.752 lat (msec) : 2=0.56%, 50=3.38% 00:11:18.752 cpu : usr=0.97%, sys=1.95%, ctx=533, majf=0, minf=1 00:11:18.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.752 job1: (groupid=0, jobs=1): err= 0: pid=1914785: Wed Nov 20 10:28:50 2024 00:11:18.752 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:11:18.752 slat (nsec): min=9280, max=39488, avg=25974.95, stdev=6868.95 00:11:18.752 clat (usec): min=875, max=42951, avg=39888.34, stdev=9458.00 00:11:18.752 lat (usec): min=887, max=42979, avg=39914.32, stdev=9461.37 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 873], 5.00th=[ 873], 10.00th=[41157], 20.00th=[41681], 00:11:18.752 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:18.752 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:11:18.752 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:18.752 | 99.99th=[42730] 00:11:18.752 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:18.752 slat (nsec): min=6628, max=54736, avg=25296.46, stdev=11278.33 00:11:18.752 clat (usec): min=108, max=1185, avg=461.49, stdev=181.81 00:11:18.752 lat (usec): min=121, max=1196, avg=486.78, stdev=186.94 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 119], 5.00th=[ 135], 10.00th=[ 163], 20.00th=[ 297], 00:11:18.752 | 30.00th=[ 367], 40.00th=[ 429], 50.00th=[ 478], 60.00th=[ 519], 00:11:18.752 | 70.00th=[ 570], 80.00th=[ 627], 
90.00th=[ 685], 95.00th=[ 725], 00:11:18.752 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 1188], 99.95th=[ 1188], 00:11:18.752 | 99.99th=[ 1188] 00:11:18.752 bw ( KiB/s): min= 4096, max= 4096, per=36.99%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.752 lat (usec) : 250=12.99%, 500=40.49%, 750=39.17%, 1000=3.77% 00:11:18.752 lat (msec) : 2=0.19%, 50=3.39% 00:11:18.752 cpu : usr=0.59%, sys=1.39%, ctx=532, majf=0, minf=2 00:11:18.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.752 job2: (groupid=0, jobs=1): err= 0: pid=1914808: Wed Nov 20 10:28:50 2024 00:11:18.752 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:18.752 slat (nsec): min=8724, max=44075, avg=19349.63, stdev=9044.68 00:11:18.752 clat (usec): min=722, max=1352, avg=1115.19, stdev=79.17 00:11:18.752 lat (usec): min=732, max=1366, avg=1134.54, stdev=78.94 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 857], 5.00th=[ 971], 10.00th=[ 1020], 20.00th=[ 1074], 00:11:18.752 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:11:18.752 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:11:18.752 | 99.00th=[ 1270], 99.50th=[ 1336], 99.90th=[ 1352], 99.95th=[ 1352], 00:11:18.752 | 99.99th=[ 1352] 00:11:18.752 write: IOPS=797, BW=3189KiB/s (3265kB/s)(3192KiB/1001msec); 0 zone resets 00:11:18.752 slat (nsec): min=3422, max=65734, avg=14098.17, stdev=9925.00 00:11:18.752 clat (usec): min=250, max=917, avg=503.80, stdev=114.46 00:11:18.752 lat (usec): min=262, max=928, avg=517.90, stdev=117.08 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 351], 20.00th=[ 404], 00:11:18.752 | 30.00th=[ 445], 40.00th=[ 474], 50.00th=[ 498], 60.00th=[ 545], 00:11:18.752 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 685], 00:11:18.752 | 99.00th=[ 783], 99.50th=[ 816], 99.90th=[ 922], 99.95th=[ 922], 00:11:18.752 | 99.99th=[ 922] 00:11:18.752 bw ( KiB/s): min= 4096, max= 4096, per=36.99%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.752 lat (usec) : 500=30.76%, 750=29.39%, 1000=3.44% 00:11:18.752 lat (msec) : 2=36.41% 00:11:18.752 cpu : usr=0.90%, sys=2.20%, ctx=1311, majf=0, minf=1 00:11:18.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 issued rwts: total=512,798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.752 job3: (groupid=0, jobs=1): err= 0: pid=1914815: Wed Nov 20 10:28:50 2024 00:11:18.752 read: IOPS=652, BW=2609KiB/s (2672kB/s)(2612KiB/1001msec) 00:11:18.752 slat (nsec): min=7363, max=68661, avg=24274.45, stdev=8568.96 00:11:18.752 clat (usec): min=180, max=1003, avg=693.06, stdev=146.05 00:11:18.752 lat (usec): min=189, max=1024, avg=717.34, stdev=147.91 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 334], 5.00th=[ 445], 
10.00th=[ 490], 20.00th=[ 570], 00:11:18.752 | 30.00th=[ 619], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 742], 00:11:18.752 | 70.00th=[ 783], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 906], 00:11:18.752 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1004], 99.95th=[ 1004], 00:11:18.752 | 99.99th=[ 1004] 00:11:18.752 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:18.752 slat (nsec): min=9973, max=77138, avg=32405.17, stdev=7889.92 00:11:18.752 clat (usec): min=132, max=834, avg=473.80, stdev=116.68 00:11:18.752 lat (usec): min=147, max=887, avg=506.20, stdev=118.86 00:11:18.752 clat percentiles (usec): 00:11:18.752 | 1.00th=[ 196], 5.00th=[ 285], 10.00th=[ 322], 20.00th=[ 371], 00:11:18.752 | 30.00th=[ 408], 40.00th=[ 445], 50.00th=[ 482], 60.00th=[ 506], 00:11:18.752 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 619], 95.00th=[ 660], 00:11:18.752 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 832], 99.95th=[ 832], 00:11:18.752 | 99.99th=[ 832] 00:11:18.752 bw ( KiB/s): min= 4096, max= 4096, per=36.99%, avg=4096.00, stdev= 0.00, samples=1 00:11:18.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:18.752 lat (usec) : 250=1.55%, 500=38.10%, 750=44.96%, 1000=15.32% 00:11:18.752 lat (msec) : 2=0.06% 00:11:18.752 cpu : usr=2.80%, sys=4.70%, ctx=1678, majf=0, minf=1 00:11:18.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.752 issued rwts: total=653,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.752 00:11:18.752 Run status group 0 (all jobs): 00:11:18.752 READ: bw=4685KiB/s (4797kB/s), 75.2KiB/s-2609KiB/s (77.0kB/s-2672kB/s), io=4816KiB (4932kB), run=1001-1028msec 00:11:18.752 WRITE: bw=10.8MiB/s (11.3MB/s), 1992KiB/s-4092KiB/s (2040kB/s-4190kB/s), io=11.1MiB (11.7MB), run=1001-1028msec 00:11:18.752 00:11:18.752 Disk stats (read/write): 00:11:18.752 nvme0n1: ios=70/512, merge=0/0, ticks=609/188, in_queue=797, util=87.27% 00:11:18.752 nvme0n2: ios=39/512, merge=0/0, ticks=1439/221, in_queue=1660, util=88.18% 00:11:18.752 nvme0n3: ios=565/512, merge=0/0, ticks=655/244, in_queue=899, util=95.36% 00:11:18.752 nvme0n4: ios=566/910, merge=0/0, ticks=422/406, in_queue=828, util=97.44% 00:11:18.752 10:28:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:18.752 [global] 00:11:18.752 thread=1 00:11:18.752 invalidate=1 00:11:18.752 rw=write 00:11:18.752 time_based=1 00:11:18.752 runtime=1 00:11:18.752 ioengine=libaio 00:11:18.752 direct=1 00:11:18.752 bs=4096 00:11:18.752 iodepth=128 00:11:18.752 norandommap=0 00:11:18.752 numjobs=1 00:11:18.752 00:11:18.752 verify_dump=1 00:11:18.752 verify_backlog=512 00:11:18.752 verify_state_save=0 00:11:18.752 do_verify=1 00:11:18.752 verify=crc32c-intel 00:11:18.752 [job0] 00:11:18.752 filename=/dev/nvme0n1 00:11:18.752 [job1] 00:11:18.752 filename=/dev/nvme0n2 00:11:18.752 [job2] 00:11:18.752 filename=/dev/nvme0n3 00:11:18.752 [job3] 00:11:18.752 filename=/dev/nvme0n4 00:11:18.752 Could not set queue depth (nvme0n1) 00:11:18.752 Could not set queue depth (nvme0n2) 00:11:18.752 Could not set queue depth (nvme0n3) 00:11:18.752 Could not set queue depth (nvme0n4) 00:11:19.012 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.012 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.012 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.012 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.012 fio-3.35 00:11:19.012 Starting 4 threads 00:11:20.395 00:11:20.395 job0: (groupid=0, jobs=1): err= 0: pid=1915257: Wed Nov 20 10:28:52 2024 00:11:20.395 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:11:20.395 slat (nsec): min=1099, max=17897k, avg=88935.88, stdev=851075.05 00:11:20.395 clat (usec): min=3464, max=43209, avg=15577.08, stdev=6974.32 00:11:20.395 lat (usec): min=3499, max=43234, avg=15666.02, stdev=7029.32 00:11:20.395 clat percentiles (usec): 00:11:20.395 | 1.00th=[ 3752], 5.00th=[ 5080], 10.00th=[ 7767], 20.00th=[10552], 00:11:20.395 | 30.00th=[11731], 40.00th=[13304], 50.00th=[14877], 60.00th=[15795], 00:11:20.395 | 70.00th=[17957], 80.00th=[20317], 90.00th=[24773], 95.00th=[30802], 00:11:20.395 | 99.00th=[33424], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:11:20.395 | 99.99th=[43254] 00:11:20.395 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1006msec); 0 zone resets 00:11:20.395 slat (nsec): min=1670, max=11392k, avg=121100.17, stdev=784636.98 00:11:20.395 clat (usec): min=1263, max=110252, avg=17825.02, stdev=19486.10 00:11:20.395 lat (usec): min=1275, max=110260, avg=17946.12, stdev=19619.34 00:11:20.395 clat percentiles (msec): 00:11:20.395 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:11:20.395 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 14], 00:11:20.395 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 32], 95.00th=[ 60], 00:11:20.396 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 111], 99.95th=[ 111], 00:11:20.396 | 99.99th=[ 111] 00:11:20.396 bw ( KiB/s): min=12424, max=18800, per=18.22%, avg=15612.00, stdev=4508.51, samples=2 00:11:20.396 iops : min= 3106, max= 4700, avg=3903.00, stdev=1127.13, samples=2 00:11:20.396 lat (msec) : 2=0.12%, 4=1.46%, 10=29.30%, 20=46.15%, 50=20.03% 00:11:20.396 lat (msec) : 100=1.64%, 250=1.30% 00:11:20.396 cpu : usr=2.69%, sys=5.67%, ctx=279, majf=0, minf=1 00:11:20.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:20.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.396 issued rwts: total=3584,4030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.396 job1: (groupid=0, jobs=1): err= 0: pid=1915273: Wed Nov 20 10:28:52 2024 00:11:20.396 read: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec) 00:11:20.396 slat (nsec): min=991, max=13975k, avg=96472.02, stdev=807552.70 00:11:20.396 clat (usec): min=1664, max=37830, avg=13205.66, stdev=6172.88 00:11:20.396 lat (usec): min=2694, max=37857, avg=13302.14, stdev=6242.49 00:11:20.396 clat percentiles (usec): 00:11:20.396 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7635], 00:11:20.396 | 30.00th=[ 8455], 40.00th=[ 9634], 50.00th=[11994], 60.00th=[14484], 00:11:20.396 | 70.00th=[15533], 80.00th=[17171], 90.00th=[23987], 95.00th=[24511], 00:11:20.396 | 99.00th=[26346], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:11:20.396 | 99.99th=[38011] 00:11:20.396 write: 
IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:20.396 slat (nsec): min=1674, max=13259k, avg=113256.68, stdev=764804.76 00:11:20.396 clat (usec): min=673, max=115822, avg=15530.34, stdev=20675.28 00:11:20.396 lat (usec): min=721, max=115830, avg=15643.59, stdev=20821.35 00:11:20.396 clat percentiles (usec): 00:11:20.396 | 1.00th=[ 1188], 5.00th=[ 2180], 10.00th=[ 3818], 20.00th=[ 5800], 00:11:20.396 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 8225], 60.00th=[ 10159], 00:11:20.396 | 70.00th=[ 12125], 80.00th=[ 17957], 90.00th=[ 31589], 95.00th=[ 66323], 00:11:20.396 | 99.00th=[109577], 99.50th=[113771], 99.90th=[115868], 99.95th=[115868], 00:11:20.396 | 99.99th=[115868] 00:11:20.396 bw ( KiB/s): min=12528, max=24328, per=21.51%, avg=18428.00, stdev=8343.86, samples=2 00:11:20.396 iops : min= 3132, max= 6082, avg=4607.00, stdev=2085.97, samples=2 00:11:20.396 lat (usec) : 750=0.03%, 1000=0.18% 00:11:20.396 lat (msec) : 2=2.16%, 4=3.53%, 10=44.56%, 20=32.24%, 50=14.23% 00:11:20.396 lat (msec) : 100=1.80%, 250=1.26% 00:11:20.396 cpu : usr=3.09%, sys=5.88%, ctx=341, majf=0, minf=1 00:11:20.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:20.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.396 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.396 job2: (groupid=0, jobs=1): err= 0: pid=1915293: Wed Nov 20 10:28:52 2024 00:11:20.396 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:11:20.396 slat (nsec): min=1008, max=9996.4k, avg=62210.40, stdev=467188.66 00:11:20.396 clat (usec): min=2718, max=22325, avg=8074.93, stdev=2508.11 00:11:20.396 lat (usec): min=2740, max=22354, avg=8137.14, stdev=2541.13 00:11:20.396 clat percentiles (usec): 00:11:20.396 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6390], 00:11:20.396 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7832], 00:11:20.396 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[11600], 95.00th=[13829], 00:11:20.396 | 99.00th=[16188], 99.50th=[16319], 99.90th=[19530], 99.95th=[19530], 00:11:20.396 | 99.99th=[22414] 00:11:20.396 write: IOPS=8381, BW=32.7MiB/s (34.3MB/s)(32.8MiB/1003msec); 0 zone resets 00:11:20.396 slat (nsec): min=1731, max=8863.7k, avg=52840.55, stdev=371573.17 00:11:20.396 clat (usec): min=1441, max=21689, avg=7196.66, stdev=2628.21 00:11:20.396 lat (usec): min=1803, max=21714, avg=7249.50, stdev=2650.32 00:11:20.396 clat percentiles (usec): 00:11:20.396 | 1.00th=[ 2671], 5.00th=[ 3752], 10.00th=[ 4228], 20.00th=[ 5604], 00:11:20.396 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:11:20.396 | 70.00th=[ 7177], 80.00th=[ 8225], 90.00th=[11076], 95.00th=[12649], 00:11:20.396 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:11:20.396 | 99.99th=[21627] 00:11:20.396 bw ( KiB/s): min=32816, max=33424, per=38.66%, avg=33120.00, stdev=429.92, samples=2 00:11:20.396 iops : min= 8204, max= 8356, avg=8280.00, stdev=107.48, samples=2 00:11:20.396 lat (msec) : 2=0.08%, 4=4.11%, 10=81.26%, 20=14.52%, 50=0.02% 00:11:20.396 cpu : usr=6.79%, sys=8.48%, ctx=702, majf=0, minf=1 00:11:20.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:20.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.396 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.396 issued rwts: total=8192,8407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.396 job3: (groupid=0, jobs=1): err= 0: pid=1915299: Wed Nov 20 10:28:52 2024 00:11:20.396 read: IOPS=4112, BW=16.1MiB/s (16.8MB/s)(16.2MiB/1011msec) 00:11:20.396 slat (nsec): min=1029, max=13387k, avg=114970.46, stdev=832237.01 00:11:20.396 clat (usec): min=4077, max=47177, avg=14004.53, stdev=6203.74 00:11:20.396 lat (usec): min=4085, max=47179, avg=14119.50, stdev=6268.55 00:11:20.396 clat percentiles (usec): 00:11:20.396 | 1.00th=[ 6718], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8717], 00:11:20.396 | 30.00th=[ 9896], 40.00th=[11207], 50.00th=[12518], 60.00th=[14222], 00:11:20.396 | 70.00th=[16188], 80.00th=[18482], 90.00th=[21103], 95.00th=[26870], 00:11:20.396 | 99.00th=[36963], 99.50th=[39060], 99.90th=[46924], 99.95th=[46924], 00:11:20.396 | 99.99th=[46924] 00:11:20.396 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:11:20.396 slat (nsec): min=1739, max=23221k, avg=108545.09, stdev=781152.86 00:11:20.396 clat (usec): min=2117, max=57711, avg=14324.86, stdev=7798.78 00:11:20.396 lat (usec): min=2125, max=57713, avg=14433.40, stdev=7845.06 00:11:20.396 clat percentiles (usec): 00:11:20.396 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6783], 20.00th=[ 9110], 00:11:20.396 | 30.00th=[10028], 40.00th=[11207], 50.00th=[13435], 60.00th=[13960], 00:11:20.396 | 70.00th=[15270], 80.00th=[17957], 90.00th=[21890], 95.00th=[29230], 00:11:20.396 | 99.00th=[50594], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 00:11:20.396 | 99.99th=[57934] 00:11:20.396 bw ( KiB/s): min=17344, max=18992, per=21.21%, avg=18168.00, stdev=1165.31, samples=2 00:11:20.396 iops : min= 4336, max= 4748, avg=4542.00, stdev=291.33, samples=2 00:11:20.396 lat (msec) : 4=0.16%, 10=29.96%, 20=56.02%, 50=13.32%, 100=0.54% 00:11:20.396 cpu : usr=3.86%, sys=4.85%, ctx=315, majf=0, minf=2 00:11:20.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:20.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.396 issued rwts: total=4158,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:20.396 00:11:20.396 Run status group 0 (all jobs): 00:11:20.396 READ: bw=77.9MiB/s (81.7MB/s), 13.9MiB/s-31.9MiB/s (14.6MB/s-33.5MB/s), io=78.7MiB (82.6MB), run=1003-1011msec 00:11:20.396 WRITE: bw=83.7MiB/s (87.7MB/s), 15.6MiB/s-32.7MiB/s (16.4MB/s-34.3MB/s), io=84.6MiB (88.7MB), run=1003-1011msec 00:11:20.396 00:11:20.396 Disk stats (read/write): 00:11:20.396 nvme0n1: ios=3121/3199, merge=0/0, ticks=49116/47594, in_queue=96710, util=84.17% 00:11:20.396 nvme0n2: ios=3114/3584, merge=0/0, ticks=41806/60003, in_queue=101809, util=90.93% 00:11:20.396 nvme0n3: ios=6707/7168, merge=0/0, ticks=51460/48408, in_queue=99868, util=95.04% 00:11:20.396 nvme0n4: ios=3621/3636, merge=0/0, ticks=49648/48553, in_queue=98201, util=96.91% 00:11:20.396 10:28:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:20.396 [global] 00:11:20.396 thread=1 00:11:20.396 invalidate=1 00:11:20.396 rw=randwrite 00:11:20.396 time_based=1 00:11:20.396 runtime=1 00:11:20.396 ioengine=libaio 00:11:20.396 
direct=1 00:11:20.396 bs=4096 00:11:20.396 iodepth=128 00:11:20.396 norandommap=0 00:11:20.396 numjobs=1 00:11:20.396 00:11:20.396 verify_dump=1 00:11:20.396 verify_backlog=512 00:11:20.396 verify_state_save=0 00:11:20.396 do_verify=1 00:11:20.396 verify=crc32c-intel 00:11:20.396 [job0] 00:11:20.396 filename=/dev/nvme0n1 00:11:20.396 [job1] 00:11:20.396 filename=/dev/nvme0n2 00:11:20.396 [job2] 00:11:20.396 filename=/dev/nvme0n3 00:11:20.396 [job3] 00:11:20.396 filename=/dev/nvme0n4 00:11:20.396 Could not set queue depth (nvme0n1) 00:11:20.396 Could not set queue depth (nvme0n2) 00:11:20.396 Could not set queue depth (nvme0n3) 00:11:20.396 Could not set queue depth (nvme0n4) 00:11:20.656 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.656 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.656 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.656 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.656 fio-3.35 00:11:20.656 Starting 4 threads 00:11:22.039 00:11:22.039 job0: (groupid=0, jobs=1): err= 0: pid=1915744: Wed Nov 20 10:28:54 2024 00:11:22.039 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:11:22.039 slat (nsec): min=918, max=24049k, avg=113260.21, stdev=1012668.37 00:11:22.039 clat (usec): min=3828, max=77209, avg=14639.86, stdev=12661.31 00:11:22.039 lat (usec): min=3840, max=77216, avg=14753.12, stdev=12767.78 00:11:22.039 clat percentiles (usec): 00:11:22.039 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7504], 00:11:22.039 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9896], 00:11:22.039 | 70.00th=[11863], 80.00th=[21627], 90.00th=[36963], 95.00th=[44303], 00:11:22.039 | 99.00th=[58983], 99.50th=[60556], 99.90th=[61080], 99.95th=[67634], 00:11:22.039 | 99.99th=[77071] 00:11:22.039 write: IOPS=5372, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1002msec); 0 zone resets 00:11:22.039 slat (nsec): min=1555, max=7189.1k, avg=72594.86, stdev=377257.05 00:11:22.039 clat (usec): min=711, max=50518, avg=9645.76, stdev=6475.03 00:11:22.039 lat (usec): min=2733, max=50543, avg=9718.36, stdev=6515.84 00:11:22.039 clat percentiles (usec): 00:11:22.039 | 1.00th=[ 4015], 5.00th=[ 5407], 10.00th=[ 6652], 20.00th=[ 7242], 00:11:22.039 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 8094], 00:11:22.039 | 70.00th=[ 8717], 80.00th=[10028], 90.00th=[13566], 95.00th=[19268], 00:11:22.039 | 99.00th=[45876], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:11:22.039 | 99.99th=[50594] 00:11:22.039 bw ( KiB/s): min=21240, max=21240, per=22.50%, avg=21240.00, stdev= 0.00, samples=1 00:11:22.039 iops : min= 5310, max= 5310, avg=5310.00, stdev= 0.00, samples=1 00:11:22.039 lat (usec) : 750=0.01% 00:11:22.039 lat (msec) : 4=0.53%, 10=70.18%, 20=16.44%, 50=11.38%, 100=1.46% 00:11:22.039 cpu : usr=3.50%, sys=6.09%, ctx=595, majf=0, minf=1 00:11:22.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:22.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.039 issued rwts: total=5120,5383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.039 job1: (groupid=0, jobs=1): err= 0: 
pid=1915766: Wed Nov 20 10:28:54 2024 00:11:22.039 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:11:22.039 slat (nsec): min=911, max=10360k, avg=73604.52, stdev=527578.62 00:11:22.039 clat (usec): min=1370, max=27417, avg=9754.83, stdev=3272.39 00:11:22.039 lat (usec): min=1378, max=27442, avg=9828.44, stdev=3307.13 00:11:22.039 clat percentiles (usec): 00:11:22.039 | 1.00th=[ 3621], 5.00th=[ 5538], 10.00th=[ 6521], 20.00th=[ 7504], 00:11:22.039 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[10028], 00:11:22.039 | 70.00th=[10421], 80.00th=[11863], 90.00th=[14222], 95.00th=[15401], 00:11:22.039 | 99.00th=[20841], 99.50th=[24249], 99.90th=[24511], 99.95th=[24511], 00:11:22.039 | 99.99th=[27395] 00:11:22.039 write: IOPS=7017, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1004msec); 0 zone resets 00:11:22.039 slat (nsec): min=1497, max=8740.0k, avg=63787.02, stdev=459985.24 00:11:22.039 clat (usec): min=570, max=49347, avg=8836.70, stdev=6115.90 00:11:22.039 lat (usec): min=578, max=49349, avg=8900.49, stdev=6151.00 00:11:22.039 clat percentiles (usec): 00:11:22.039 | 1.00th=[ 1860], 5.00th=[ 3949], 10.00th=[ 4621], 20.00th=[ 5538], 00:11:22.039 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7832], 00:11:22.039 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[13173], 95.00th=[21627], 00:11:22.039 | 99.00th=[38536], 99.50th=[45876], 99.90th=[48497], 99.95th=[49546], 00:11:22.039 | 99.99th=[49546] 00:11:22.039 bw ( KiB/s): min=25992, max=29360, per=29.31%, avg=27676.00, stdev=2381.54, samples=2 00:11:22.039 iops : min= 6498, max= 7340, avg=6919.00, stdev=595.38, samples=2 00:11:22.039 lat (usec) : 750=0.03%, 1000=0.13% 00:11:22.039 lat (msec) : 2=0.62%, 4=2.45%, 10=68.37%, 20=24.52%, 50=3.88% 00:11:22.039 cpu : usr=4.99%, sys=7.78%, ctx=462, majf=0, minf=1 00:11:22.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:22.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.040 issued rwts: total=6656,7046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.040 job2: (groupid=0, jobs=1): err= 0: pid=1915792: Wed Nov 20 10:28:54 2024 00:11:22.040 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:22.040 slat (nsec): min=905, max=20199k, avg=131112.50, stdev=939036.82 00:11:22.040 clat (usec): min=4883, max=65388, avg=17184.06, stdev=9919.50 00:11:22.040 lat (usec): min=4888, max=65415, avg=17315.18, stdev=9998.85 00:11:22.040 clat percentiles (usec): 00:11:22.040 | 1.00th=[ 5669], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8979], 00:11:22.040 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13960], 60.00th=[16909], 00:11:22.040 | 70.00th=[20317], 80.00th=[22938], 90.00th=[29754], 95.00th=[38536], 00:11:22.040 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[56361], 00:11:22.040 | 99.99th=[65274] 00:11:22.040 write: IOPS=4139, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1004msec); 0 zone resets 00:11:22.040 slat (nsec): min=1605, max=12657k, avg=105830.53, stdev=644135.20 00:11:22.040 clat (usec): min=1528, max=41424, avg=13701.75, stdev=8220.91 00:11:22.040 lat (usec): min=1551, max=41432, avg=13807.58, stdev=8279.40 00:11:22.040 clat percentiles (usec): 00:11:22.040 | 1.00th=[ 4047], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 8029], 00:11:22.040 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[10159], 60.00th=[11338], 00:11:22.040 | 70.00th=[15270], 
80.00th=[21627], 90.00th=[27657], 95.00th=[31589], 00:11:22.040 | 99.00th=[35914], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:11:22.040 | 99.99th=[41681] 00:11:22.040 bw ( KiB/s): min=16384, max=16384, per=17.35%, avg=16384.00, stdev= 0.00, samples=2 00:11:22.040 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:11:22.040 lat (msec) : 2=0.46%, 4=0.01%, 10=36.35%, 20=35.26%, 50=26.87% 00:11:22.040 lat (msec) : 100=1.04% 00:11:22.040 cpu : usr=3.09%, sys=4.79%, ctx=403, majf=0, minf=2 00:11:22.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:22.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.040 issued rwts: total=4096,4156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.040 job3: (groupid=0, jobs=1): err= 0: pid=1915805: Wed Nov 20 10:28:54 2024 00:11:22.040 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:11:22.040 slat (nsec): min=986, max=7606.6k, avg=72053.76, stdev=499227.57 00:11:22.040 clat (usec): min=2543, max=22244, avg=9650.78, stdev=2931.58 00:11:22.040 lat (usec): min=2554, max=22270, avg=9722.83, stdev=2967.29 00:11:22.040 clat percentiles (usec): 00:11:22.040 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7046], 00:11:22.040 | 30.00th=[ 7570], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[ 9634], 00:11:22.040 | 70.00th=[10814], 80.00th=[11600], 90.00th=[13960], 95.00th=[15270], 00:11:22.040 | 99.00th=[19268], 99.50th=[20841], 99.90th=[20841], 99.95th=[20841], 00:11:22.040 | 99.99th=[22152] 00:11:22.040 write: IOPS=7090, BW=27.7MiB/s (29.0MB/s)(27.8MiB/1003msec); 0 zone resets 00:11:22.040 slat (nsec): min=1637, max=17555k, avg=66186.86, stdev=477049.98 00:11:22.040 clat (usec): min=472, max=31263, avg=8848.43, stdev=3485.95 00:11:22.040 lat (usec): min=1311, max=31297, avg=8914.62, stdev=3517.93 00:11:22.040 clat percentiles (usec): 00:11:22.040 | 1.00th=[ 3097], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 6718], 00:11:22.040 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8455], 00:11:22.040 | 70.00th=[ 9241], 80.00th=[10814], 90.00th=[14353], 95.00th=[15401], 00:11:22.040 | 99.00th=[18482], 99.50th=[24511], 99.90th=[24511], 99.95th=[24773], 00:11:22.040 | 99.99th=[31327] 00:11:22.040 bw ( KiB/s): min=24576, max=31296, per=29.59%, avg=27936.00, stdev=4751.76, samples=2 00:11:22.040 iops : min= 6144, max= 7824, avg=6984.00, stdev=1187.94, samples=2 00:11:22.040 lat (usec) : 500=0.01% 00:11:22.040 lat (msec) : 2=0.15%, 4=1.81%, 10=67.76%, 20=29.34%, 50=0.94% 00:11:22.040 cpu : usr=5.89%, sys=8.08%, ctx=474, majf=0, minf=1 00:11:22.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:22.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.040 issued rwts: total=6656,7112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.040 00:11:22.040 Run status group 0 (all jobs): 00:11:22.040 READ: bw=87.6MiB/s (91.9MB/s), 15.9MiB/s-25.9MiB/s (16.7MB/s-27.2MB/s), io=88.0MiB (92.3MB), run=1002-1004msec 00:11:22.040 WRITE: bw=92.2MiB/s (96.7MB/s), 16.2MiB/s-27.7MiB/s (17.0MB/s-29.0MB/s), io=92.6MiB (97.1MB), run=1002-1004msec 00:11:22.040 00:11:22.040 Disk stats (read/write): 00:11:22.040 
nvme0n1: ios=3634/3926, merge=0/0, ticks=25241/17027, in_queue=42268, util=86.27% 00:11:22.040 nvme0n2: ios=5170/5511, merge=0/0, ticks=35936/38677, in_queue=74613, util=84.97% 00:11:22.040 nvme0n3: ios=3214/3584, merge=0/0, ticks=21898/20521, in_queue=42419, util=90.92% 00:11:22.040 nvme0n4: ios=5180/5279, merge=0/0, ticks=33112/31285, in_queue=64397, util=95.35% 00:11:22.040 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:22.040 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1915926 00:11:22.040 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:22.040 10:28:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:22.040 [global] 00:11:22.040 thread=1 00:11:22.040 invalidate=1 00:11:22.040 rw=read 00:11:22.040 time_based=1 00:11:22.040 runtime=10 00:11:22.040 ioengine=libaio 00:11:22.040 direct=1 00:11:22.040 bs=4096 00:11:22.040 iodepth=1 00:11:22.040 norandommap=1 00:11:22.040 numjobs=1 00:11:22.040 00:11:22.040 [job0] 00:11:22.040 filename=/dev/nvme0n1 00:11:22.040 [job1] 00:11:22.040 filename=/dev/nvme0n2 00:11:22.040 [job2] 00:11:22.040 filename=/dev/nvme0n3 00:11:22.040 [job3] 00:11:22.040 filename=/dev/nvme0n4 00:11:22.040 Could not set queue depth (nvme0n1) 00:11:22.040 Could not set queue depth (nvme0n2) 00:11:22.040 Could not set queue depth (nvme0n3) 00:11:22.040 Could not set queue depth (nvme0n4) 00:11:22.614 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.614 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.614 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.614 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.614 fio-3.35 00:11:22.614 Starting 4 threads 00:11:25.158 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:25.158 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10633216, buflen=4096 00:11:25.158 fio: pid=1916329, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.158 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:25.418 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.418 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:25.418 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=13848576, buflen=4096 00:11:25.418 fio: pid=1916316, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.678 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.678 10:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc1 00:11:25.678 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=286720, buflen=4096 00:11:25.678 fio: pid=1916256, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.678 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11776000, buflen=4096 00:11:25.678 fio: pid=1916290, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:25.678 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.678 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:25.939 00:11:25.939 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1916256: Wed Nov 20 10:28:58 2024 00:11:25.939 read: IOPS=24, BW=95.1KiB/s (97.4kB/s)(280KiB/2943msec) 00:11:25.939 slat (usec): min=24, max=15692, avg=466.35, stdev=2606.81 00:11:25.939 clat (usec): min=711, max=43028, avg=41249.87, stdev=4940.43 00:11:25.939 lat (usec): min=754, max=57975, avg=41498.69, stdev=5325.57 00:11:25.939 clat percentiles (usec): 00:11:25.939 | 1.00th=[ 709], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:25.939 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:25.939 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:11:25.939 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:25.939 | 99.99th=[43254] 00:11:25.939 bw ( KiB/s): min= 96, max= 96, per=0.83%, avg=96.00, stdev= 0.00, samples=5 00:11:25.939 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:11:25.939 lat (usec) : 750=1.41% 00:11:25.939 lat (msec) : 50=97.18% 00:11:25.939 cpu : usr=0.00%, sys=0.10%, ctx=73, majf=0, minf=1 00:11:25.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.939 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.939 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.939 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1916290: Wed Nov 20 10:28:58 2024 00:11:25.939 read: IOPS=933, BW=3731KiB/s (3821kB/s)(11.2MiB/3082msec) 00:11:25.939 slat (usec): min=6, max=21468, avg=44.20, stdev=534.29 00:11:25.939 clat (usec): min=496, max=41800, avg=1012.30, stdev=771.11 00:11:25.939 lat (usec): min=502, max=41828, avg=1052.41, stdev=912.25 00:11:25.939 clat percentiles (usec): 00:11:25.939 | 1.00th=[ 750], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 947], 00:11:25.939 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:11:25.939 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:11:25.939 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 2409], 99.95th=[ 6194], 00:11:25.939 | 99.99th=[41681] 00:11:25.939 bw ( KiB/s): min= 3385, max= 3888, per=32.59%, avg=3774.83, stdev=192.35, samples=6 00:11:25.939 iops : min= 846, max= 972, avg=943.67, stdev=48.19, samples=6 00:11:25.939 lat (usec) : 500=0.03%, 750=0.97%, 1000=47.11% 00:11:25.939 lat (msec) : 2=51.74%, 4=0.03%, 10=0.03%, 50=0.03% 00:11:25.939 cpu : usr=2.27%, sys=3.25%, ctx=2880, majf=0, minf=2 00:11:25.939 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.939 issued rwts: total=2876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.939 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1916316: Wed Nov 20 10:28:58 2024 00:11:25.939 read: IOPS=1230, BW=4921KiB/s (5040kB/s)(13.2MiB/2748msec) 00:11:25.939 slat (usec): min=6, max=17802, avg=31.42, stdev=336.86 00:11:25.939 clat (usec): min=247, max=41796, avg=769.13, stdev=709.11 00:11:25.939 lat (usec): min=274, max=41804, avg=800.55, stdev=784.80 00:11:25.939 clat percentiles (usec): 00:11:25.939 | 1.00th=[ 562], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 709], 00:11:25.939 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 766], 60.00th=[ 783], 00:11:25.939 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 848], 00:11:25.939 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 1106], 00:11:25.939 | 99.99th=[41681] 00:11:25.939 bw ( KiB/s): min= 5040, max= 5144, per=43.92%, avg=5086.40, stdev=45.75, samples=5 00:11:25.939 iops : min= 1260, max= 1286, avg=1271.60, stdev=11.44, samples=5 00:11:25.939 lat (usec) : 250=0.03%, 500=0.41%, 750=36.10%, 1000=63.36% 00:11:25.939 lat (msec) : 2=0.03%, 50=0.03% 00:11:25.940 cpu : usr=1.06%, sys=3.57%, ctx=3384, majf=0, minf=2 00:11:25.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.940 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.940 issued rwts: total=3382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.940 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1916329: Wed Nov 20 10:28:58 2024 00:11:25.940 read: IOPS=1020, BW=4079KiB/s (4176kB/s)(10.1MiB/2546msec) 00:11:25.940 slat (nsec): min=6876, max=61145, avg=26711.96, stdev=4366.75 00:11:25.940 clat (usec): min=411, max=1255, avg=937.26, stdev=120.09 00:11:25.940 lat (usec): min=438, max=1282, avg=963.97, stdev=121.50 00:11:25.940 clat percentiles (usec): 00:11:25.940 | 1.00th=[ 594], 5.00th=[ 685], 10.00th=[ 758], 20.00th=[ 832], 00:11:25.940 | 30.00th=[ 906], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 996], 00:11:25.940 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:11:25.940 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1205], 00:11:25.940 | 99.99th=[ 1254] 00:11:25.940 bw ( KiB/s): min= 3904, max= 4808, per=35.65%, avg=4128.00, stdev=382.29, samples=5 00:11:25.940 iops : min= 976, max= 1202, avg=1032.00, stdev=95.57, samples=5 00:11:25.940 lat (usec) : 500=0.15%, 750=8.78%, 1000=55.03% 00:11:25.940 lat (msec) : 2=36.00% 00:11:25.940 cpu : usr=2.08%, sys=3.54%, ctx=2597, majf=0, minf=2 00:11:25.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.940 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.940 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.940 00:11:25.940 Run status 
group 0 (all jobs): 00:11:25.940 READ: bw=11.3MiB/s (11.9MB/s), 95.1KiB/s-4921KiB/s (97.4kB/s-5040kB/s), io=34.9MiB (36.5MB), run=2546-3082msec 00:11:25.940 00:11:25.940 Disk stats (read/write): 00:11:25.940 nvme0n1: ios=66/0, merge=0/0, ticks=2722/0, in_queue=2722, util=92.15% 00:11:25.940 nvme0n2: ios=2850/0, merge=0/0, ticks=2631/0, in_queue=2631, util=93.12% 00:11:25.940 nvme0n3: ios=3206/0, merge=0/0, ticks=2355/0, in_queue=2355, util=95.46% 00:11:25.940 nvme0n4: ios=2596/0, merge=0/0, ticks=2284/0, in_queue=2284, util=96.34% 00:11:25.940 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:25.940 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:26.200 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.200 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:26.464 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.464 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:26.464 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.464 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:26.745 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:26.745 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1915926 00:11:26.745 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:26.745 10:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:11:26.745 nvmf hotplug test: fio failed as expected 00:11:26.745 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.019 rmmod nvme_tcp 00:11:27.019 rmmod nvme_fabrics 00:11:27.019 rmmod nvme_keyring 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1912416 ']' 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1912416 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1912416 ']' 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1912416 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.019 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1912416 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1912416' 00:11:27.325 killing process with pid 1912416 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1912416 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1912416 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.325 10:28:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.235 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.235 00:11:29.235 real 0m29.405s 00:11:29.235 user 2m34.145s 00:11:29.235 sys 0m10.016s 00:11:29.235 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.235 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.235 ************************************ 00:11:29.235 END TEST nvmf_fio_target 00:11:29.235 ************************************ 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.496 ************************************ 00:11:29.496 START TEST nvmf_bdevio 00:11:29.496 ************************************ 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.496 * Looking for test storage... 
00:11:29.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.496 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:29.497 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:29.758 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.759 --rc genhtml_branch_coverage=1 00:11:29.759 --rc genhtml_function_coverage=1 00:11:29.759 --rc genhtml_legend=1 00:11:29.759 --rc geninfo_all_blocks=1 00:11:29.759 --rc geninfo_unexecuted_blocks=1 00:11:29.759 00:11:29.759 ' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.759 --rc genhtml_branch_coverage=1 00:11:29.759 --rc genhtml_function_coverage=1 00:11:29.759 --rc genhtml_legend=1 00:11:29.759 --rc geninfo_all_blocks=1 00:11:29.759 --rc geninfo_unexecuted_blocks=1 00:11:29.759 00:11:29.759 ' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.759 --rc genhtml_branch_coverage=1 00:11:29.759 --rc genhtml_function_coverage=1 00:11:29.759 --rc genhtml_legend=1 00:11:29.759 --rc geninfo_all_blocks=1 00:11:29.759 --rc geninfo_unexecuted_blocks=1 00:11:29.759 00:11:29.759 ' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.759 --rc genhtml_branch_coverage=1 00:11:29.759 --rc genhtml_function_coverage=1 00:11:29.759 --rc genhtml_legend=1 00:11:29.759 --rc geninfo_all_blocks=1 00:11:29.759 --rc geninfo_unexecuted_blocks=1 00:11:29.759 00:11:29.759 ' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.759 10:29:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:37.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:37.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.900 10:29:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:37.900 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:37.900 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.900 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.901 
10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:11:37.901 00:11:37.901 --- 10.0.0.2 ping statistics --- 00:11:37.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.901 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:11:37.901 00:11:37.901 --- 10.0.0.1 ping statistics --- 00:11:37.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.901 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1921480 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1921480 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1921480 ']' 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.901 10:29:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.901 [2024-11-20 10:29:09.431680] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
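Aside — the namespace bring-up traced above, condensed into a runnable sketch (reconstructed from the trace, not captured output; interface and namespace names are this rig's and will differ elsewhere). nvmf_tcp_init splits the two E810 ports into a loopback topology: the target port cvl_0_0 moves into a private namespace at 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1, and the two pings verify the path in both directions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                     # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespaced target -> root namespace

The target itself then runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78, traced at nvmf/common.sh@508 above); mask 0x78 selects cores 3-6, which is exactly where the reactor notices just below report the target's reactors starting.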
00:11:37.901 [2024-11-20 10:29:09.431745] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.901 [2024-11-20 10:29:09.531155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.901 [2024-11-20 10:29:09.583284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.901 [2024-11-20 10:29:09.583336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.901 [2024-11-20 10:29:09.583344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.901 [2024-11-20 10:29:09.583352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.901 [2024-11-20 10:29:09.583358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.901 [2024-11-20 10:29:09.585705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:37.901 [2024-11-20 10:29:09.585868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:37.901 [2024-11-20 10:29:09.586028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.901 [2024-11-20 10:29:09.586028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:37.901 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.901 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:37.901 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.901 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.901 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 [2024-11-20 10:29:10.293256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 Malloc0 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.162 10:29:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 [2024-11-20 10:29:10.374449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:38.162 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:38.162 { 00:11:38.162 "params": { 00:11:38.162 "name": "Nvme$subsystem", 00:11:38.162 "trtype": "$TEST_TRANSPORT", 00:11:38.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.162 "adrfam": "ipv4", 00:11:38.162 "trsvcid": "$NVMF_PORT", 00:11:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.162 "hdgst": ${hdgst:-false}, 00:11:38.162 "ddgst": ${ddgst:-false} 00:11:38.163 }, 00:11:38.163 "method": "bdev_nvme_attach_controller" 00:11:38.163 } 00:11:38.163 EOF 00:11:38.163 )") 00:11:38.163 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:38.163 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:38.163 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:38.163 10:29:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:38.163 "params": { 00:11:38.163 "name": "Nvme1", 00:11:38.163 "trtype": "tcp", 00:11:38.163 "traddr": "10.0.0.2", 00:11:38.163 "adrfam": "ipv4", 00:11:38.163 "trsvcid": "4420", 00:11:38.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:38.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:38.163 "hdgst": false, 00:11:38.163 "ddgst": false 00:11:38.163 }, 00:11:38.163 "method": "bdev_nvme_attach_controller" 00:11:38.163 }' 00:11:38.163 [2024-11-20 10:29:10.433570] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
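Aside — rpc_cmd in the trace above is the harness wrapper around SPDK's scripts/rpc.py against the target's RPC socket (the waitforlisten trace shows rpc_addr=/var/tmp/spdk.sock, which is also rpc.py's default). Spelled out, bdevio.sh@18-@22 provision the target like this (a sketch using the method names and arguments exactly as traced):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options from NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is the initiator half: gen_nvmf_target_json expands its heredoc template into a bdev_nvme_attach_controller config that attaches 'Nvme1' to 10.0.0.2:4420 / cnode1, and bdevio consumes it via --json /dev/fd/62 so no config file ever touches disk. Note also that the core masks are disjoint: the target holds -m 0x78 (cores 3-6) while the bdevio EAL line below shows -c 0x7 (cores 0-2), so target and initiator never contend for a core.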
00:11:38.163 [2024-11-20 10:29:10.433631] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921580 ] 00:11:38.163 [2024-11-20 10:29:10.525937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:38.424 [2024-11-20 10:29:10.582612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.424 [2024-11-20 10:29:10.582775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.424 [2024-11-20 10:29:10.582776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.685 I/O targets: 00:11:38.685 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:38.685 00:11:38.685 00:11:38.685 CUnit - A unit testing framework for C - Version 2.1-3 00:11:38.685 http://cunit.sourceforge.net/ 00:11:38.685 00:11:38.685 00:11:38.685 Suite: bdevio tests on: Nvme1n1 00:11:38.685 Test: blockdev write read block ...passed 00:11:38.685 Test: blockdev write zeroes read block ...passed 00:11:38.685 Test: blockdev write zeroes read no split ...passed 00:11:38.685 Test: blockdev write zeroes read split ...passed 00:11:38.685 Test: blockdev write zeroes read split partial ...passed 00:11:38.685 Test: blockdev reset ...[2024-11-20 10:29:10.930777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:38.685 [2024-11-20 10:29:10.930870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d3970 (9): Bad file descriptor 00:11:38.685 [2024-11-20 10:29:10.984562] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:38.685 passed 00:11:38.685 Test: blockdev write read 8 blocks ...passed 00:11:38.685 Test: blockdev write read size > 128k ...passed 00:11:38.686 Test: blockdev write read invalid size ...passed 00:11:38.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:38.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:38.686 Test: blockdev write read max offset ...passed 00:11:38.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:38.947 Test: blockdev writev readv 8 blocks ...passed 00:11:38.947 Test: blockdev writev readv 30 x 1block ...passed 00:11:38.947 Test: blockdev writev readv block ...passed 00:11:38.947 Test: blockdev writev readv size > 128k ...passed 00:11:38.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:38.947 Test: blockdev comparev and writev ...[2024-11-20 10:29:11.246970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.947 [2024-11-20 10:29:11.247001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:38.947 [2024-11-20 10:29:11.247017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.947 [2024-11-20 10:29:11.247026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:38.947 [2024-11-20 10:29:11.247488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.948 [2024-11-20 10:29:11.247505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:38.948 [2024-11-20 10:29:11.247524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.948 [2024-11-20 10:29:11.247532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:38.948 [2024-11-20 10:29:11.247987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.948 [2024-11-20 10:29:11.247998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:38.948 [2024-11-20 10:29:11.248012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.948 [2024-11-20 10:29:11.248020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:38.948 [2024-11-20 10:29:11.248484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.948 [2024-11-20 10:29:11.248495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:38.948 [2024-11-20 10:29:11.248509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:38.948 [2024-11-20 10:29:11.248517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:38.948 passed 00:11:39.208 Test: blockdev nvme passthru rw ...passed 00:11:39.208 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:29:11.331827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.208 [2024-11-20 10:29:11.331840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:39.208 [2024-11-20 10:29:11.332162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.208 [2024-11-20 10:29:11.332173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:39.209 [2024-11-20 10:29:11.332457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.209 [2024-11-20 10:29:11.332468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:39.209 [2024-11-20 10:29:11.332827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.209 [2024-11-20 10:29:11.332837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:39.209 passed 00:11:39.209 Test: blockdev nvme admin passthru ...passed 00:11:39.209 Test: blockdev copy ...passed 00:11:39.209 00:11:39.209 Run Summary: Type Total Ran Passed Failed Inactive 00:11:39.209 suites 1 1 n/a 0 0 00:11:39.209 tests 23 23 23 0 0 00:11:39.209 asserts 152 152 152 0 n/a 00:11:39.209 00:11:39.209 Elapsed time = 1.203 seconds 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.209 rmmod nvme_tcp 00:11:39.209 rmmod nvme_fabrics 00:11:39.209 rmmod nvme_keyring 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
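Aside — the NOTICE-level error completions in the comparev-and-writev output above are the expected-failure leg of the fused compare-and-write exercise, not a real fault: the run summary still counts 23/23 tests and 152/152 asserts passed. SPDK prints completion status as (SCT/SC) in hex, so the three codes seen in this run decode as:

    (02/85): SCT 0x2 Media and Data Integrity Errors, SC 0x85 Compare Failure
    (00/09): SCT 0x0 Generic, SC 0x09 Command Aborted due to Failed Fused Command
    (00/01): SCT 0x0 Generic, SC 0x01 Invalid Command Opcode

Each COMPARE that miscompares fails with (02/85) and takes its fused WRITE partner down with (00/09); the (00/01) completions in the nvme passthru vendor-specific test are likewise a negative path probed on purpose.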
00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1921480 ']' 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1921480 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1921480 ']' 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1921480 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.209 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1921480 00:11:39.469 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:39.469 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:39.469 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1921480' 00:11:39.469 killing process with pid 1921480 00:11:39.469 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1921480 00:11:39.469 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1921480 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.470 10:29:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.012 00:11:42.012 real 0m12.152s 00:11:42.012 user 0m13.008s 00:11:42.012 sys 0m6.244s 00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.012 ************************************ 00:11:42.012 END TEST nvmf_bdevio 00:11:42.012 ************************************ 00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:42.012 00:11:42.012 real 5m4.996s 00:11:42.012 user 11m51.418s 00:11:42.012 sys 1m52.628s 
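Aside — the nvmftestfini teardown traced above reduces to a few steps; roughly (killprocess and _remove_spdk_ns are harness helpers, and ip netns delete is an assumption for what _remove_spdk_ns performs):

    kill 1921480 && wait 1921480                           # nvmfpid recorded at nvmfappstart
    modprobe -v -r nvme-tcp                                # drops nvme_tcp/nvme_fabrics/nvme_keyring, per rmmod output above
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumption: the namespace removal inside _remove_spdk_ns
    ip -4 addr flush cvl_0_1

Tagging every inserted rule with an 'SPDK_NVMF' comment at setup time (the -m comment --comment form traced earlier) is what lets this grep-and-restore pass delete exactly the harness's own firewall rules and nothing else.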
00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.012 ************************************ 00:11:42.012 END TEST nvmf_target_core 00:11:42.012 ************************************ 00:11:42.012 10:29:13 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:42.012 10:29:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.012 10:29:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.012 10:29:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.012 ************************************ 00:11:42.012 START TEST nvmf_target_extra 00:11:42.012 ************************************ 00:11:42.012 10:29:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:42.012 * Looking for test storage... 00:11:42.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:42.012 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.012 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.012 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:42.012 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:42.012 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.012 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.013 --rc genhtml_branch_coverage=1 00:11:42.013 --rc genhtml_function_coverage=1 00:11:42.013 --rc genhtml_legend=1 00:11:42.013 --rc geninfo_all_blocks=1 00:11:42.013 --rc geninfo_unexecuted_blocks=1 00:11:42.013 00:11:42.013 ' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.013 --rc genhtml_branch_coverage=1 00:11:42.013 --rc genhtml_function_coverage=1 00:11:42.013 --rc genhtml_legend=1 00:11:42.013 --rc geninfo_all_blocks=1 00:11:42.013 --rc geninfo_unexecuted_blocks=1 00:11:42.013 00:11:42.013 ' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.013 --rc genhtml_branch_coverage=1 00:11:42.013 --rc genhtml_function_coverage=1 00:11:42.013 --rc genhtml_legend=1 00:11:42.013 --rc geninfo_all_blocks=1 00:11:42.013 --rc geninfo_unexecuted_blocks=1 00:11:42.013 00:11:42.013 ' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:42.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.013 --rc genhtml_branch_coverage=1 00:11:42.013 --rc genhtml_function_coverage=1 00:11:42.013 --rc genhtml_legend=1 00:11:42.013 --rc geninfo_all_blocks=1 00:11:42.013 --rc geninfo_unexecuted_blocks=1 00:11:42.013 00:11:42.013 ' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
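Aside — the lt 1.15 2 trace above is scripts/common.sh's field-wise version guard: split both version strings on '.', '-' or ':' and compare numerically, field by field. A minimal restatement (assuming purely numeric fields; the real helper also sanitizes each field through its decimal function), with the rest of common.sh's port and address assignments continuing right after this note:

    lt() {                                   # usage: lt A B  ->  exit 0 iff version A < version B
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                               # equal is not less-than
    }
    lt 1.15 2 && echo pre-2.x                # matches the trace: 1 < 2 in the first field

Here lcov reported 1.15, so the pre-2.x '--rc lcov_branch_coverage=1' option spelling is what gets exported above.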
00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.013 ************************************ 00:11:42.013 START TEST nvmf_example 00:11:42.013 ************************************ 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:42.013 * Looking for test storage... 
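Aside (the probe's 'Found test storage' answer follows directly below) — two details of the common.sh environment pass worth decoding. First, the initiator identity: nvme gen-hostnqn, traced above, returns nqn.2014-08.org.nvmexpress:uuid:<host-uuid> — here 00d0226a-fbea-ec11-9bc7-a4bf019282be — and NVME_HOST packs it into --hostnqn/--hostid arguments for later nvme connect calls. Second, the '[: : integer expression expected' message from nvmf/common.sh line 33 is benign stderr noise, not a failure: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an unset flag, test's -eq requires integers on both sides, and the condition simply evaluates false so the script carries on. Illustration (flag is a hypothetical name):

    flag=""
    [ "$flag" -eq 1 ]          # bash: [: : integer expression expected -- and a false exit status
    [ "${flag:-0}" -eq 1 ]     # defaulting the expansion keeps the numeric test quiet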
00:11:42.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.013 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.275 --rc genhtml_branch_coverage=1 00:11:42.275 --rc genhtml_function_coverage=1 00:11:42.275 --rc genhtml_legend=1 00:11:42.275 --rc geninfo_all_blocks=1 00:11:42.275 --rc geninfo_unexecuted_blocks=1 00:11:42.275 00:11:42.275 ' 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.275 --rc genhtml_branch_coverage=1 00:11:42.275 --rc genhtml_function_coverage=1 00:11:42.275 --rc genhtml_legend=1 00:11:42.275 --rc geninfo_all_blocks=1 00:11:42.275 --rc geninfo_unexecuted_blocks=1 00:11:42.275 00:11:42.275 ' 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.275 --rc genhtml_branch_coverage=1 00:11:42.275 --rc genhtml_function_coverage=1 00:11:42.275 --rc genhtml_legend=1 00:11:42.275 --rc geninfo_all_blocks=1 00:11:42.275 --rc geninfo_unexecuted_blocks=1 00:11:42.275 00:11:42.275 ' 00:11:42.275 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.275 --rc genhtml_branch_coverage=1 00:11:42.275 --rc genhtml_function_coverage=1 00:11:42.275 --rc genhtml_legend=1 00:11:42.275 --rc geninfo_all_blocks=1 00:11:42.275 --rc geninfo_unexecuted_blocks=1 00:11:42.275 00:11:42.275 ' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:42.276 10:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:42.276 10:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.276 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:50.415 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.415 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:50.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:50.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:50.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:50.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.416 10:29:21 
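
Both E810 ports (vendor 0x8086, device 0x159b) have now been resolved to their kernel netdevs by walking each PCI function's sysfs node. The harness works from a pre-built pci_bus_cache map, but the same lookup can be reproduced one-off with lspci (assumed installed), roughly:

    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] && echo "Found net device under $pci: ${path##*/}"
        done
    done
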
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:50.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:11:50.416 00:11:50.416 --- 10.0.0.2 ping statistics --- 00:11:50.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.416 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:11:50.416 00:11:50.416 --- 10.0.0.1 ping statistics --- 00:11:50.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.416 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.416 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1926245 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1926245 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1926245 ']' 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.416 10:29:22 
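
That completes the test topology nvmftestinit builds: one E810 port is moved into a private network namespace to play the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), a single iptables rule admits NVMe/TCP, and a ping in each direction proves the link. Condensed to a hand-runnable sketch (root required; cvl_0_0/cvl_0_1 are the interfaces discovered above):

    ip netns add cvl_0_0_ns_spdk                        # private ns for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one physical port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back
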
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.416 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.678 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.678 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:11:50.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:50.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:50.679 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:02.940 Initializing NVMe Controllers
00:12:02.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:02.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:02.940 Initialization complete. Launching workers.
00:12:02.940 ========================================================
00:12:02.940                                                                              Latency(us)
00:12:02.940 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:12:02.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18652.29      72.86    3431.97     639.59   15490.59
00:12:02.940 ========================================================
00:12:02.940 Total                                                                  :   18652.29      72.86    3431.97     639.59   15490.59
00:12:02.940
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:02.940 rmmod nvme_tcp
00:12:02.940 rmmod nvme_fabrics
00:12:02.940 rmmod nvme_keyring
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1926245 ']'
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1926245
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1926245 ']'
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1926245
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1926245
00:12:02.940 10:29:33
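
The rpc_cmd calls traced above are wrappers around scripts/rpc.py talking to the target's RPC socket (the default /var/tmp/spdk.sock in this run). Replayed by hand against a running target, the provisioning uses the same arguments as the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192 B IO unit
    ./scripts/rpc.py bdev_malloc_create 64 512                  # 64 MiB RAM disk, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf run above then drives that listener with queue depth 64, 4 KiB I/O, randrw at a 30% read mix, for 10 seconds; the summary table reports 18652 IOPS at a 3.43 ms mean latency.
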
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1926245' 00:12:02.940 killing process with pid 1926245 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1926245 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1926245 00:12:02.940 nvmf threads initialize successfully 00:12:02.940 bdev subsystem init successfully 00:12:02.940 created a nvmf target service 00:12:02.940 create targets's poll groups done 00:12:02.940 all subsystems of target started 00:12:02.940 nvmf target is running 00:12:02.940 all subsystems of target stopped 00:12:02.940 destroy targets's poll groups done 00:12:02.940 destroyed the nvmf target service 00:12:02.940 bdev subsystem finish successfully 00:12:02.940 nvmf threads destroy successfully 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.940 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:03.510 00:12:03.510 real 0m21.447s 00:12:03.510 user 0m46.713s 00:12:03.510 sys 0m7.045s 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:03.510 ************************************ 00:12:03.510 END TEST nvmf_example 00:12:03.510 ************************************ 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.510 ************************************ 00:12:03.510 START TEST nvmf_filesystem 00:12:03.510 ************************************ 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:03.510 * Looking for test storage... 00:12:03.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.510 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.774 --rc genhtml_branch_coverage=1 00:12:03.774 --rc genhtml_function_coverage=1 00:12:03.774 --rc genhtml_legend=1 00:12:03.774 --rc geninfo_all_blocks=1 00:12:03.774 --rc geninfo_unexecuted_blocks=1 00:12:03.774 00:12:03.774 ' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.774 --rc genhtml_branch_coverage=1 00:12:03.774 --rc genhtml_function_coverage=1 00:12:03.774 --rc genhtml_legend=1 00:12:03.774 --rc geninfo_all_blocks=1 00:12:03.774 --rc geninfo_unexecuted_blocks=1 00:12:03.774 00:12:03.774 ' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.774 --rc genhtml_branch_coverage=1 00:12:03.774 --rc genhtml_function_coverage=1 00:12:03.774 --rc genhtml_legend=1 00:12:03.774 --rc geninfo_all_blocks=1 00:12:03.774 --rc geninfo_unexecuted_blocks=1 00:12:03.774 00:12:03.774 ' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.774 --rc genhtml_branch_coverage=1 00:12:03.774 --rc genhtml_function_coverage=1 00:12:03.774 --rc genhtml_legend=1 00:12:03.774 --rc geninfo_all_blocks=1 00:12:03.774 --rc geninfo_unexecuted_blocks=1 00:12:03.774 00:12:03.774 ' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:03.774 10:29:35 
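
The lt/cmp_versions machinery above is a plain field-by-field version comparison, used here to decide whether the installed lcov predates 2.0 and needs the extra branch/function coverage flags. The same idea in isolation, a sketch rather than scripts/common.sh itself:

    lt() {                            # usage: lt A B  ->  true if version A < B
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"           # split each version on . - : separators
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                      # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov, enable the extra coverage flags"
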
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:03.774 
10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:03.774 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:03.775 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:03.775 #define SPDK_CONFIG_H 00:12:03.775 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:03.775 #define SPDK_CONFIG_APPS 1 00:12:03.775 #define SPDK_CONFIG_ARCH native 00:12:03.775 #undef SPDK_CONFIG_ASAN 00:12:03.775 #undef SPDK_CONFIG_AVAHI 00:12:03.775 #undef SPDK_CONFIG_CET 00:12:03.775 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:03.775 #define SPDK_CONFIG_COVERAGE 1 00:12:03.775 #define SPDK_CONFIG_CROSS_PREFIX 00:12:03.775 #undef SPDK_CONFIG_CRYPTO 00:12:03.775 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:03.775 #undef SPDK_CONFIG_CUSTOMOCF 00:12:03.775 #undef SPDK_CONFIG_DAOS 00:12:03.775 #define SPDK_CONFIG_DAOS_DIR 00:12:03.775 #define SPDK_CONFIG_DEBUG 1 00:12:03.775 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:03.775 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:03.775 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:03.775 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:03.775 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:03.775 #undef SPDK_CONFIG_DPDK_UADK 00:12:03.775 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:03.775 #define SPDK_CONFIG_EXAMPLES 1 00:12:03.775 #undef SPDK_CONFIG_FC 00:12:03.775 #define SPDK_CONFIG_FC_PATH 00:12:03.775 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:03.775 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:03.775 #define SPDK_CONFIG_FSDEV 1 00:12:03.775 #undef SPDK_CONFIG_FUSE 00:12:03.775 #undef SPDK_CONFIG_FUZZER 00:12:03.775 #define SPDK_CONFIG_FUZZER_LIB 00:12:03.775 #undef SPDK_CONFIG_GOLANG 00:12:03.775 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:03.775 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:03.775 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:03.775 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:03.775 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:03.775 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:03.775 #undef SPDK_CONFIG_HAVE_LZ4 00:12:03.775 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:03.775 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:03.775 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:03.775 #define SPDK_CONFIG_IDXD 1 00:12:03.775 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:03.775 #undef SPDK_CONFIG_IPSEC_MB 00:12:03.775 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:03.775 #define SPDK_CONFIG_ISAL 1 00:12:03.775 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:03.775 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:03.775 #define SPDK_CONFIG_LIBDIR 00:12:03.775 #undef SPDK_CONFIG_LTO 00:12:03.775 #define SPDK_CONFIG_MAX_LCORES 128 00:12:03.775 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:03.775 #define SPDK_CONFIG_NVME_CUSE 1 00:12:03.775 #undef SPDK_CONFIG_OCF 00:12:03.775 #define SPDK_CONFIG_OCF_PATH 00:12:03.775 #define SPDK_CONFIG_OPENSSL_PATH 00:12:03.776 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:03.776 #define SPDK_CONFIG_PGO_DIR 00:12:03.776 #undef SPDK_CONFIG_PGO_USE 00:12:03.776 #define SPDK_CONFIG_PREFIX /usr/local 00:12:03.776 #undef SPDK_CONFIG_RAID5F 00:12:03.776 #undef SPDK_CONFIG_RBD 00:12:03.776 #define SPDK_CONFIG_RDMA 1 00:12:03.776 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:03.776 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:03.776 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:03.776 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:03.776 #define SPDK_CONFIG_SHARED 1 00:12:03.776 #undef SPDK_CONFIG_SMA 00:12:03.776 #define SPDK_CONFIG_TESTS 1 00:12:03.776 #undef SPDK_CONFIG_TSAN 
00:12:03.776 #define SPDK_CONFIG_UBLK 1 00:12:03.776 #define SPDK_CONFIG_UBSAN 1 00:12:03.776 #undef SPDK_CONFIG_UNIT_TESTS 00:12:03.776 #undef SPDK_CONFIG_URING 00:12:03.776 #define SPDK_CONFIG_URING_PATH 00:12:03.776 #undef SPDK_CONFIG_URING_ZNS 00:12:03.776 #undef SPDK_CONFIG_USDT 00:12:03.776 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:03.776 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:03.776 #define SPDK_CONFIG_VFIO_USER 1 00:12:03.776 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:03.776 #define SPDK_CONFIG_VHOST 1 00:12:03.776 #define SPDK_CONFIG_VIRTIO 1 00:12:03.776 #undef SPDK_CONFIG_VTUNE 00:12:03.776 #define SPDK_CONFIG_VTUNE_DIR 00:12:03.776 #define SPDK_CONFIG_WERROR 1 00:12:03.776 #define SPDK_CONFIG_WPDK_DIR 00:12:03.776 #undef SPDK_CONFIG_XNVME 00:12:03.776 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.776 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:03.776 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:03.776 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:03.777 10:29:36 
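
Each ': <value>' / 'export SPDK_TEST_*' pair in this run of trace lines is bash parameter defaulting: assign only when the variable is still unset (so values from autorun-spdk.conf win), then export the result so every child test script inherits the decision. In isolation, with an illustrative flag name:

    : "${SPDK_TEST_EXAMPLE:=0}"    # ':' is a no-op; the expansion sets the default
    export SPDK_TEST_EXAMPLE       # children see the final 0/1 decision
    (( SPDK_TEST_EXAMPLE == 1 )) && echo "suite enabled"
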
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:03.777 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
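
Note how LD_LIBRARY_PATH and PYTHONPATH above carry the same three directories several times over: each nested re-source of the environment script prepends them again. The duplication is harmless to the loader, just noisy. Purely as a sketch (not something the suite does here), a colon-separated variable can be de-duplicated while keeping first-occurrence order:

    dedup_path() {
        local entry out=
        local IFS=:
        for entry in $1; do                      # unquoted: split on ':'
            [[ -z $entry ]] && continue          # skip empty fields (leading ':')
            case ":$out:" in *":$entry:"*) continue ;; esac
            out+=${out:+:}$entry
        done
        printf '%s\n' "$out"
    }

    LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")
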
00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
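
The suppression dance traced above whitelists a known leak in libfuse3 so LeakSanitizer does not fail the run on it: the file is rebuilt from scratch, a `leak:` pattern is echoed in, and LSAN_OPTIONS points at it. A self-contained sketch of the same setup (the binary name is hypothetical; the path and options mirror the trace):

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo "leak:libfuse3.so" > "$supp"       # suppress leaks whose stack mentions libfuse3

    export LSAN_OPTIONS="suppressions=$supp"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
    ./some_sanitizer_built_test             # hypothetical instrumented binary
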
00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:03.778 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1929029 ]] 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1929029 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
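
The `kill -0 1929029` just above is a liveness gate, not a kill: signal 0 delivers nothing, and the exit status alone reports whether the PID exists and is signalable. In isolation:

    pid=1929029                              # PID taken from the trace; any PID works
    if kill -0 "$pid" 2>/dev/null; then
        echo "runner $pid still alive"
    fi
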
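The `set_test_storage 2147483648` call whose trace follows probes for a directory whose filesystem has at least 2 GiB free: candidates (the test dir, then an mktemp fallback) are tried in order and the first with enough space wins. The real helper parses the whole `df -T` table into per-mount arrays and special-cases tmpfs; this condensed sketch leans on GNU df instead:

    requested=2147483648                     # 2 GiB, as in the call above
    candidates=("$PWD" "$(mktemp -udt spdk.XXXXXX)")

    for dir in "${candidates[@]}"; do
        mkdir -p "$dir"
        avail=$(df --output=avail -B1 "$dir" | tail -n1)   # free bytes on $dir's fs
        if (( avail >= requested )); then
            echo "using $dir ($avail bytes free)"
            break
        fi
    done
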
00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.yxNM7n 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yxNM7n/tests/target /tmp/spdk.yxNM7n 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:03.779 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118393389056 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10963120128 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:03.779 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677789696 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=466944 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:03.779 * Looking for test storage... 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118393389056 00:12:03.779 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13177712640 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.780 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.041 --rc genhtml_branch_coverage=1 00:12:04.041 --rc genhtml_function_coverage=1 00:12:04.041 --rc genhtml_legend=1 00:12:04.041 --rc geninfo_all_blocks=1 00:12:04.041 --rc geninfo_unexecuted_blocks=1 00:12:04.041 00:12:04.041 ' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.041 --rc genhtml_branch_coverage=1 00:12:04.041 --rc genhtml_function_coverage=1 00:12:04.041 --rc genhtml_legend=1 00:12:04.041 --rc geninfo_all_blocks=1 00:12:04.041 --rc geninfo_unexecuted_blocks=1 00:12:04.041 00:12:04.041 ' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.041 --rc genhtml_branch_coverage=1 00:12:04.041 --rc genhtml_function_coverage=1 00:12:04.041 --rc genhtml_legend=1 00:12:04.041 --rc geninfo_all_blocks=1 00:12:04.041 --rc geninfo_unexecuted_blocks=1 00:12:04.041 00:12:04.041 ' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.041 --rc genhtml_branch_coverage=1 00:12:04.041 --rc genhtml_function_coverage=1 00:12:04.041 --rc genhtml_legend=1 00:12:04.041 --rc geninfo_all_blocks=1 00:12:04.041 --rc geninfo_unexecuted_blocks=1 00:12:04.041 00:12:04.041 ' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.041 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.042 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.042 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.177 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:12.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:12.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.178 10:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:12.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:12.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.178 10:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:12:12.178 00:12:12.178 --- 10.0.0.2 ping statistics --- 00:12:12.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.178 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:12:12.178 00:12:12.178 --- 10.0.0.1 ping statistics --- 00:12:12.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.178 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.178 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.179 ************************************ 00:12:12.179 START TEST nvmf_filesystem_no_in_capsule 00:12:12.179 ************************************ 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1932873 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1932873 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1932873 ']' 00:12:12.179 
10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.179 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.179 [2024-11-20 10:29:43.928518] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:12:12.179 [2024-11-20 10:29:43.928580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.179 [2024-11-20 10:29:44.030372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.179 [2024-11-20 10:29:44.083931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.179 [2024-11-20 10:29:44.083984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.179 [2024-11-20 10:29:44.083993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.179 [2024-11-20 10:29:44.084000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.179 [2024-11-20 10:29:44.084007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
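The network plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a short iproute2/iptables sequence run before the target application is launched. A minimal standalone sketch, assuming the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing shown in the trace, and root privileges:

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF & # then start the target in the namespace

Both pings returning 0% loss is what lets common.sh@450 return 0, after which the harness loads nvme-tcp and waits for nvmf_tgt to listen on /var/tmp/spdk.sock, as echoed above.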
00:12:12.179 [2024-11-20 10:29:44.086469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.179 [2024-11-20 10:29:44.086630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.179 [2024-11-20 10:29:44.086794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.179 [2024-11-20 10:29:44.086794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.439 [2024-11-20 10:29:44.803004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.439 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.699 Malloc1 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.699 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.699 [2024-11-20 10:29:44.951514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.699 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.700 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.700 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:12.700 { 00:12:12.700 "name": "Malloc1", 00:12:12.700 "aliases": [ 00:12:12.700 "2b7e288a-ca0a-4bf6-ba49-a6484ea473da" 00:12:12.700 ], 00:12:12.700 "product_name": "Malloc disk", 00:12:12.700 "block_size": 512, 00:12:12.700 "num_blocks": 1048576, 00:12:12.700 "uuid": "2b7e288a-ca0a-4bf6-ba49-a6484ea473da", 00:12:12.700 "assigned_rate_limits": { 00:12:12.700 "rw_ios_per_sec": 0, 00:12:12.700 "rw_mbytes_per_sec": 0, 00:12:12.700 "r_mbytes_per_sec": 0, 00:12:12.700 "w_mbytes_per_sec": 0 00:12:12.700 }, 00:12:12.700 "claimed": true, 00:12:12.700 "claim_type": "exclusive_write", 00:12:12.700 "zoned": false, 00:12:12.700 "supported_io_types": { 00:12:12.700 "read": 
true, 00:12:12.700 "write": true, 00:12:12.700 "unmap": true, 00:12:12.700 "flush": true, 00:12:12.700 "reset": true, 00:12:12.700 "nvme_admin": false, 00:12:12.700 "nvme_io": false, 00:12:12.700 "nvme_io_md": false, 00:12:12.700 "write_zeroes": true, 00:12:12.700 "zcopy": true, 00:12:12.700 "get_zone_info": false, 00:12:12.700 "zone_management": false, 00:12:12.700 "zone_append": false, 00:12:12.700 "compare": false, 00:12:12.700 "compare_and_write": false, 00:12:12.700 "abort": true, 00:12:12.700 "seek_hole": false, 00:12:12.700 "seek_data": false, 00:12:12.700 "copy": true, 00:12:12.700 "nvme_iov_md": false 00:12:12.700 }, 00:12:12.700 "memory_domains": [ 00:12:12.700 { 00:12:12.700 "dma_device_id": "system", 00:12:12.700 "dma_device_type": 1 00:12:12.700 }, 00:12:12.700 { 00:12:12.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.700 "dma_device_type": 2 00:12:12.700 } 00:12:12.700 ], 00:12:12.700 "driver_specific": {} 00:12:12.700 } 00:12:12.700 ]' 00:12:12.700 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:12.700 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:12.700 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:12.960 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:12.960 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:12.960 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:12.960 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:12.960 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.344 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.344 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.344 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.344 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.344 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:16.887 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:16.888 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:16.888 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:17.468 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.407 ************************************ 00:12:18.407 START TEST filesystem_ext4 00:12:18.407 ************************************ 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
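Stripped of the xtrace prefixes, the target provisioning and host attach that filesystem.sh@52-@62 just performed condense to the sequence below. This is a sketch, not the verbatim script: rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py (talking to /var/tmp/spdk.sock inside the namespace), and the host NQN/UUID is specific to this machine:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data on this pass
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

The jq '.[] .block_size' and '.[] .num_blocks' probes recover 512 x 1048576 = 536870912 bytes from bdev_get_bdevs, matching the 536870912 echoed from /sys/block/nvme0n1, so the size check at filesystem.sh@67 passes and the namespace is partitioned with parted before the per-filesystem subtests begin.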
00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:18.407 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:18.407 mke2fs 1.47.0 (5-Feb-2023) 00:12:18.408 Discarding device blocks: 0/522240 done 00:12:18.408 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:18.408 Filesystem UUID: a52e4076-6383-4b86-b955-2c8e4909e968 00:12:18.408 Superblock backups stored on blocks: 00:12:18.408 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:18.408 00:12:18.408 Allocating group tables: 0/64 done 00:12:18.408 Writing inode tables: 0/64 done 00:12:18.668 Creating journal (8192 blocks): done 00:12:18.668 Writing superblocks and filesystem accounting information: 0/64 done 00:12:18.668 00:12:18.668 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:18.668 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.245 
10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1932873 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.245 00:12:25.245 real 0m5.854s 00:12:25.245 user 0m0.028s 00:12:25.245 sys 0m0.078s 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 ************************************ 00:12:25.245 END TEST filesystem_ext4 00:12:25.245 ************************************ 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.245 ************************************ 00:12:25.245 START TEST filesystem_btrfs 00:12:25.245 ************************************ 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:25.245 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:25.245 btrfs-progs v6.8.1 00:12:25.245 See https://btrfs.readthedocs.io for more information. 00:12:25.245 00:12:25.245 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:25.245 NOTE: several default settings have changed in version 5.15, please make sure 00:12:25.245 this does not affect your deployments: 00:12:25.245 - DUP for metadata (-m dup) 00:12:25.245 - enabled no-holes (-O no-holes) 00:12:25.245 - enabled free-space-tree (-R free-space-tree) 00:12:25.245 00:12:25.245 Label: (null) 00:12:25.245 UUID: 12c73d64-1bd1-471f-8f8c-ada8c1bcd9b4 00:12:25.245 Node size: 16384 00:12:25.245 Sector size: 4096 (CPU page size: 4096) 00:12:25.245 Filesystem size: 510.00MiB 00:12:25.245 Block group profiles: 00:12:25.245 Data: single 8.00MiB 00:12:25.245 Metadata: DUP 32.00MiB 00:12:25.245 System: DUP 8.00MiB 00:12:25.245 SSD detected: yes 00:12:25.245 Zoned device: no 00:12:25.245 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:25.245 Checksum: crc32c 00:12:25.245 Number of devices: 1 00:12:25.245 Devices: 00:12:25.245 ID SIZE PATH 00:12:25.245 1 510.00MiB /dev/nvme0n1p1 00:12:25.245 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:25.245 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1932873 00:12:25.245 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.246 
10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.246 00:12:25.246 real 0m0.615s 00:12:25.246 user 0m0.042s 00:12:25.246 sys 0m0.107s 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.246 ************************************ 00:12:25.246 END TEST filesystem_btrfs 00:12:25.246 ************************************ 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.246 ************************************ 00:12:25.246 START TEST filesystem_xfs 00:12:25.246 ************************************ 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:25.246 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.246 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.246 = sectsz=512 attr=2, projid32bit=1 00:12:25.246 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.246 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:25.246 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:25.246 = sunit=0 swidth=0 blks 00:12:25.246 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.246 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.246 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.246 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:25.816 Discarding blocks...Done. 00:12:25.816 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:25.816 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:27.724 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1932873 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:27.724 00:12:27.724 real 0m2.794s 00:12:27.724 user 0m0.028s 00:12:27.724 sys 0m0.079s 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.724 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:27.724 ************************************ 00:12:27.724 END TEST filesystem_xfs 00:12:27.724 ************************************ 00:12:27.984 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.244 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.505 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.765 10:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1932873 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1932873 ']' 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1932873 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1932873 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1932873' 00:12:28.765 killing process with pid 1932873 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1932873 00:12:28.765 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1932873 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.026 00:12:29.026 real 0m17.329s 00:12:29.026 user 1m8.323s 00:12:29.026 sys 0m1.496s 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.026 ************************************ 00:12:29.026 END TEST nvmf_filesystem_no_in_capsule 00:12:29.026 ************************************ 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.026 ************************************ 00:12:29.026 START TEST nvmf_filesystem_in_capsule 00:12:29.026 ************************************ 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1936570 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1936570 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1936570 ']' 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
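The in_capsule variant starting here reruns the identical provisioning and ext4/btrfs/xfs checks; the only functional difference is filesystem.sh@47 setting in_capsule=4096, so the transport is created with "-c 4096" (in-capsule data size), letting small write payloads travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. Condensed, the per-filesystem check that each subtest repeats is roughly the following paraphrase of target/filesystem.sh@18-@43 (a sketch, not the verbatim helper; $1 is the fstype, $2 the nvme device name, nvmfpid the target PID captured at startup):

    nvmf_filesystem_create() {
        local fstype=$1 dev=/dev/${2}p1 force=-f
        [ "$fstype" = ext4 ] && force=-F        # mkfs.ext4 forces with -F, btrfs/xfs with -f
        mkfs.$fstype $force "$dev"              # filesystem on the NVMe-oF partition
        mount "$dev" /mnt/device
        touch /mnt/device/aaa && sync           # exercise a write plus flush over the fabric
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                      # target process must have survived the I/O
        lsblk -l -o NAME | grep -q -w "$2"      # device still visible...
        lsblk -l -o NAME | grep -q -w "${2}p1"  # ...and still partitioned
    }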
00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.026 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.026 [2024-11-20 10:30:01.334821] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:12:29.026 [2024-11-20 10:30:01.334881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.285 [2024-11-20 10:30:01.428375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.285 [2024-11-20 10:30:01.469393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.285 [2024-11-20 10:30:01.469436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.285 [2024-11-20 10:30:01.469442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.285 [2024-11-20 10:30:01.469446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.285 [2024-11-20 10:30:01.469451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.285 [2024-11-20 10:30:01.470901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.285 [2024-11-20 10:30:01.471058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.285 [2024-11-20 10:30:01.471205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.285 [2024-11-20 10:30:01.471206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.853 [2024-11-20 10:30:02.191706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.853 10:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.853 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 Malloc1 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 [2024-11-20 10:30:02.308005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:30.113 10:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:30.113 { 00:12:30.113 "name": "Malloc1", 00:12:30.113 "aliases": [ 00:12:30.113 "56fd26d9-3f35-47f5-8dcb-240639b80d98" 00:12:30.113 ], 00:12:30.113 "product_name": "Malloc disk", 00:12:30.113 "block_size": 512, 00:12:30.113 "num_blocks": 1048576, 00:12:30.113 "uuid": "56fd26d9-3f35-47f5-8dcb-240639b80d98", 00:12:30.113 "assigned_rate_limits": { 00:12:30.113 "rw_ios_per_sec": 0, 00:12:30.113 "rw_mbytes_per_sec": 0, 00:12:30.113 "r_mbytes_per_sec": 0, 00:12:30.113 "w_mbytes_per_sec": 0 00:12:30.113 }, 00:12:30.113 "claimed": true, 00:12:30.113 "claim_type": "exclusive_write", 00:12:30.113 "zoned": false, 00:12:30.113 "supported_io_types": { 00:12:30.113 "read": true, 00:12:30.113 "write": true, 00:12:30.113 "unmap": true, 00:12:30.113 "flush": true, 00:12:30.113 "reset": true, 00:12:30.113 "nvme_admin": false, 00:12:30.113 "nvme_io": false, 00:12:30.113 "nvme_io_md": false, 00:12:30.113 "write_zeroes": true, 00:12:30.113 "zcopy": true, 00:12:30.113 "get_zone_info": false, 00:12:30.113 "zone_management": false, 00:12:30.113 "zone_append": false, 00:12:30.113 "compare": false, 00:12:30.113 "compare_and_write": false, 00:12:30.113 "abort": true, 00:12:30.113 "seek_hole": false, 00:12:30.113 "seek_data": false, 00:12:30.113 "copy": true, 00:12:30.113 "nvme_iov_md": false 00:12:30.113 }, 00:12:30.113 "memory_domains": [ 00:12:30.113 { 00:12:30.113 "dma_device_id": "system", 00:12:30.113 "dma_device_type": 1 00:12:30.113 }, 00:12:30.113 { 00:12:30.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.113 "dma_device_type": 2 00:12:30.113 } 00:12:30.113 ], 00:12:30.113 "driver_specific": {} 00:12:30.113 } 00:12:30.113 ]' 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:30.113 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.039 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.039 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.039 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.039 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:32.039 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:33.950 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:33.950 10:30:06 
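The stretch above computes the expected size from the bdev JSON, connects the initiator, and polls until the namespace shows up in lsblk under the subsystem serial (every 2 s, at most 16 tries). A hedged standalone equivalent, reusing the jq filters from the trace (the scripts/rpc.py path is an assumption):

  bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
  nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
  malloc_size=$(( bs * nb ))                                              # 536870912 bytes
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  i=0
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do
      (( ++i > 15 )) && exit 1    # give up after roughly 32 seconds
      sleep 2
  done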
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:12:34.521 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:35.901 ************************************
00:12:35.901 START TEST filesystem_in_capsule_ext4
00:12:35.901 ************************************
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:12:35.901 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:12:35.901 mke2fs 1.47.0 (5-Feb-2023)
00:12:35.901 Discarding device blocks: 0/522240 done
00:12:35.901 Creating filesystem with 522240 1k blocks and 130560 inodes
00:12:35.901 Filesystem UUID: 1c65dce5-25d3-46cc-9b08-82ec10ee9671
00:12:35.901 Superblock backups stored on blocks:
00:12:35.901 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:12:35.901
00:12:35.901 Allocating group tables: 0/64 done
00:12:35.901 Writing inode tables: 0/64 done
00:12:35.901 Creating journal (8192 blocks): done
00:12:37.288 Writing superblocks and filesystem accounting information: 0/64 done
00:12:37.288
00:12:37.288 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:12:37.288 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1936570
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:43.989
00:12:43.989 real 0m7.312s
00:12:43.989 user 0m0.026s
00:12:43.989 sys 0m0.079s
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:12:43.989 ************************************
00:12:43.989 END TEST filesystem_in_capsule_ext4
00:12:43.989 ************************************
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:43.989
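Every filesystem_in_capsule_* subtest drives the same nvmf_filesystem_create smoke test against the exported namespace: make the filesystem, mount it, create and delete a file with syncs in between, unmount, and confirm with kill -0 that the target (pid 1936570 in this run) survived the I/O. Condensed, for the ext4 case just finished:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe; sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa; sync
  umount /mnt/device
  kill -0 "$nvmfpid"    # fails if the target crashed while serving I/O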
************************************ 00:12:43.989 START TEST filesystem_in_capsule_btrfs 00:12:43.989 ************************************ 00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:43.989 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:43.990 btrfs-progs v6.8.1 00:12:43.990 See https://btrfs.readthedocs.io for more information. 00:12:43.990 00:12:43.990 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:43.990 NOTE: several default settings have changed in version 5.15, please make sure
00:12:43.990 this does not affect your deployments:
00:12:43.990 - DUP for metadata (-m dup)
00:12:43.990 - enabled no-holes (-O no-holes)
00:12:43.990 - enabled free-space-tree (-R free-space-tree)
00:12:43.990
00:12:43.990 Label: (null)
00:12:43.990 UUID: 72096cec-af67-440c-8b76-7a99326c8ee2
00:12:43.990 Node size: 16384
00:12:43.990 Sector size: 4096 (CPU page size: 4096)
00:12:43.990 Filesystem size: 510.00MiB
00:12:43.990 Block group profiles:
00:12:43.990 Data: single 8.00MiB
00:12:43.990 Metadata: DUP 32.00MiB
00:12:43.990 System: DUP 8.00MiB
00:12:43.990 SSD detected: yes
00:12:43.990 Zoned device: no
00:12:43.990 Features: extref, skinny-metadata, no-holes, free-space-tree
00:12:43.990 Checksum: crc32c
00:12:43.990 Number of devices: 1
00:12:43.990 Devices:
00:12:43.990 ID SIZE PATH
00:12:43.990 1 510.00MiB /dev/nvme0n1p1
00:12:43.990
00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:12:43.990 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1936570
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:43.990
00:12:43.990 real 0m0.894s
00:12:43.990 user 0m0.032s
00:12:43.990 sys 0m0.118s
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:43.990 ************************************
00:12:43.990 END TEST filesystem_in_capsule_btrfs
00:12:43.990 ************************************
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:43.990 ************************************
00:12:43.990 START TEST filesystem_in_capsule_xfs
00:12:43.990 ************************************
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:12:43.990 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:43.990 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:12:43.990 = sectsz=512 attr=2, projid32bit=1
00:12:43.990 = crc=1 finobt=1, sparse=1, rmapbt=0
00:12:43.990 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:12:43.990 data = bsize=4096 blocks=130560, imaxpct=25
00:12:43.990 = sunit=0 swidth=0 blks
00:12:43.990 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:12:43.990 log =internal log bsize=4096 blocks=16384, version=2
00:12:43.990 = sectsz=512 sunit=0 blks, lazy-count=1
00:12:43.990 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:45.373 Discarding blocks...Done.
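Note how make_filesystem chooses the force flag before invoking mkfs: mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs use -f, which is what the '[' xfs = ext4 ']' branch above decides. In outline:

  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
  mkfs.$fstype $force /dev/nvme0n1p1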
00:12:45.373 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:45.373 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1936570 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:47.950 00:12:47.950 real 0m3.608s 00:12:47.950 user 0m0.031s 00:12:47.950 sys 0m0.077s 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:47.950 ************************************ 00:12:47.950 END TEST filesystem_in_capsule_xfs 00:12:47.950 ************************************ 00:12:47.950 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:47.950 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:47.950 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:48.210 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1936570 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1936570 ']' 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1936570 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1936570 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1936570' 00:12:48.211 killing process with pid 1936570 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1936570 00:12:48.211 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1936570 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:48.471 00:12:48.471 real 0m19.391s 00:12:48.471 user 1m16.684s 00:12:48.471 sys 0m1.428s 00:12:48.471 10:30:20 
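Teardown mirrors setup: disconnect the host, delete the subsystem over RPC, and stop the target. killprocess first checks that the pid still carries the expected process name (the ps --no-headers -o comm= probe above) before signalling. A rough sketch (the scripts/rpc.py path is an assumption):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1936570 && wait 1936570    # the nvmf_tgt pid from this run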
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:48.471 ************************************ 00:12:48.471 END TEST nvmf_filesystem_in_capsule 00:12:48.471 ************************************ 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.471 rmmod nvme_tcp 00:12:48.471 rmmod nvme_fabrics 00:12:48.471 rmmod nvme_keyring 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.471 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.011 00:12:51.011 real 0m47.106s 00:12:51.011 user 2m27.392s 00:12:51.011 sys 0m8.881s 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:51.011 
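nvmftestfini then unwinds the environment from the outside in: unload the NVMe transport modules, strip only the firewall rules this suite tagged at insert time, and drop the target network namespace. A sketch of the same cleanup (the netns delete is an assumption about what _remove_spdk_ns does; the log only shows the call):

  modprobe -v -r nvme-tcp       # also removes nvme_fabrics/nvme_keyring once unused
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1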
************************************ 00:12:51.011 END TEST nvmf_filesystem 00:12:51.011 ************************************ 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.011 ************************************ 00:12:51.011 START TEST nvmf_target_discovery 00:12:51.011 ************************************ 00:12:51.011 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:51.011 * Looking for test storage... 00:12:51.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.011 --rc genhtml_branch_coverage=1 00:12:51.011 --rc genhtml_function_coverage=1 00:12:51.011 --rc genhtml_legend=1 00:12:51.011 --rc geninfo_all_blocks=1 00:12:51.011 --rc geninfo_unexecuted_blocks=1 00:12:51.011 00:12:51.011 ' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.011 --rc genhtml_branch_coverage=1 00:12:51.011 --rc genhtml_function_coverage=1 00:12:51.011 --rc genhtml_legend=1 00:12:51.011 --rc geninfo_all_blocks=1 00:12:51.011 --rc geninfo_unexecuted_blocks=1 00:12:51.011 00:12:51.011 ' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.011 --rc genhtml_branch_coverage=1 00:12:51.011 --rc genhtml_function_coverage=1 00:12:51.011 --rc genhtml_legend=1 00:12:51.011 --rc geninfo_all_blocks=1 00:12:51.011 --rc geninfo_unexecuted_blocks=1 00:12:51.011 00:12:51.011 ' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.011 --rc genhtml_branch_coverage=1 00:12:51.011 --rc genhtml_function_coverage=1 00:12:51.011 --rc genhtml_legend=1 00:12:51.011 --rc geninfo_all_blocks=1 00:12:51.011 --rc geninfo_unexecuted_blocks=1 00:12:51.011 00:12:51.011 ' 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.011 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
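nvmf/common.sh mints a fresh host identity for each run: nvme gen-hostnqn emits a uuid-based NQN, the host ID is the trailing UUID, and both ride along on every nvme connect via the NVME_HOST array. Roughly (the suffix-stripping expansion is an assumption about how common.sh derives the ID):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID after the last colon
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")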
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.012 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.148 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.149 10:30:30 
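The '[: : integer expression expected' complaint from nvmf/common.sh line 33 above is a real, if benign, scripting bug captured by this log: the tested variable expands to the empty string where -eq needs an integer. The usual defensive pattern is to default the expansion; the flag name below is hypothetical, since the log does not show which variable line 33 tests:

  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then    # SOME_TEST_FLAG is hypothetical
      NVMF_APP+=(-e 0xFFFF)                    # illustrative consequence only
  fi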
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:59.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:59.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:59.149 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
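Device discovery walks each candidate E810 PCI function and maps it to its kernel netdev through sysfs, keeping only interfaces that are up. The core of that mapping, as the trace executes it:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
      net_devs+=("${pci_net_devs[@]}")
  done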
00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:59.149 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.149 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.150 10:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:59.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:12:59.150 00:12:59.150 --- 10.0.0.2 ping statistics --- 00:12:59.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.150 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:59.150 00:12:59.150 --- 10.0.0.1 ping statistics --- 00:12:59.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.150 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1945115 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1945115 00:12:59.150 10:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1945115 ']' 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.150 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 [2024-11-20 10:30:30.770836] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:12:59.150 [2024-11-20 10:30:30.770905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.150 [2024-11-20 10:30:30.871974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.150 [2024-11-20 10:30:30.926301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.150 [2024-11-20 10:30:30.926350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.150 [2024-11-20 10:30:30.926359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.150 [2024-11-20 10:30:30.926366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.150 [2024-11-20 10:30:30.926372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
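
With the namespace plumbed, the target application is launched inside cvl_0_0_ns_spdk and the harness blocks until its RPC socket answers. A minimal sketch of that startup; `waitforlisten` is the autotest_common.sh helper seen in the trace, which polls the UNIX socket /var/tmp/spdk.sock, and backgrounding with `&`/`$!` is a simplification of how the harness captures nvmfpid:

    # flags mirror the trace: instance 0, tracepoint mask 0xFFFF, 4-core mask 0xF
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once the app accepts RPCs on /var/tmp/spdk.sock
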
00:12:59.150 [2024-11-20 10:30:30.928645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.150 [2024-11-20 10:30:30.928807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.150 [2024-11-20 10:30:30.928949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.150 [2024-11-20 10:30:30.928950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 [2024-11-20 10:30:31.653570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 Null1 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 [2024-11-20 10:30:31.714025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 Null2 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:59.413 Null3 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.413 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 Null4 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.674 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:59.936 00:12:59.936 Discovery Log Number of Records 6, Generation counter 6 00:12:59.936 =====Discovery Log Entry 0====== 00:12:59.936 trtype: tcp 00:12:59.936 adrfam: ipv4 00:12:59.936 subtype: current discovery subsystem 00:12:59.936 treq: not required 00:12:59.936 portid: 0 00:12:59.936 trsvcid: 4420 00:12:59.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:59.936 traddr: 10.0.0.2 00:12:59.936 eflags: explicit discovery connections, duplicate discovery information 00:12:59.936 sectype: none 00:12:59.936 =====Discovery Log Entry 1====== 00:12:59.936 trtype: tcp 00:12:59.936 adrfam: ipv4 00:12:59.936 subtype: nvme subsystem 00:12:59.936 treq: not required 00:12:59.936 portid: 0 00:12:59.936 trsvcid: 4420 00:12:59.936 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:59.936 traddr: 10.0.0.2 00:12:59.936 eflags: none 00:12:59.936 sectype: none 00:12:59.936 =====Discovery Log Entry 2====== 00:12:59.936 trtype: tcp 00:12:59.936 adrfam: ipv4 00:12:59.936 subtype: nvme subsystem 00:12:59.936 treq: not required 00:12:59.936 portid: 0 00:12:59.936 trsvcid: 4420 00:12:59.936 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:59.936 traddr: 10.0.0.2 00:12:59.936 eflags: none 00:12:59.936 sectype: none 00:12:59.936 =====Discovery Log Entry 3====== 00:12:59.936 trtype: tcp 00:12:59.936 adrfam: ipv4 00:12:59.936 subtype: nvme subsystem 00:12:59.936 treq: not required 00:12:59.936 portid: 0 00:12:59.936 trsvcid: 4420 00:12:59.936 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:59.936 traddr: 10.0.0.2 00:12:59.936 eflags: none 00:12:59.936 sectype: none 00:12:59.936 =====Discovery Log Entry 4====== 00:12:59.936 trtype: tcp 00:12:59.936 adrfam: ipv4 00:12:59.936 subtype: nvme subsystem 
00:12:59.936 treq: not required 00:12:59.936 portid: 0 00:12:59.936 trsvcid: 4420 00:12:59.936 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:59.936 traddr: 10.0.0.2 00:12:59.936 eflags: none 00:12:59.936 sectype: none 00:12:59.936 =====Discovery Log Entry 5====== 00:12:59.936 trtype: tcp 00:12:59.936 adrfam: ipv4 00:12:59.936 subtype: discovery subsystem referral 00:12:59.936 treq: not required 00:12:59.936 portid: 0 00:12:59.936 trsvcid: 4430 00:12:59.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:59.936 traddr: 10.0.0.2 00:12:59.936 eflags: none 00:12:59.936 sectype: none 00:12:59.936 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:59.936 Perform nvmf subsystem discovery via RPC 00:12:59.936 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:59.936 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.936 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.936 [ 00:12:59.936 { 00:12:59.936 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:59.936 "subtype": "Discovery", 00:12:59.936 "listen_addresses": [ 00:12:59.936 { 00:12:59.936 "trtype": "TCP", 00:12:59.936 "adrfam": "IPv4", 00:12:59.936 "traddr": "10.0.0.2", 00:12:59.936 "trsvcid": "4420" 00:12:59.936 } 00:12:59.936 ], 00:12:59.936 "allow_any_host": true, 00:12:59.936 "hosts": [] 00:12:59.936 }, 00:12:59.936 { 00:12:59.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.936 "subtype": "NVMe", 00:12:59.936 "listen_addresses": [ 00:12:59.936 { 00:12:59.936 "trtype": "TCP", 00:12:59.936 "adrfam": "IPv4", 00:12:59.936 "traddr": "10.0.0.2", 00:12:59.936 "trsvcid": "4420" 00:12:59.936 } 00:12:59.936 ], 00:12:59.936 "allow_any_host": true, 00:12:59.936 "hosts": [], 00:12:59.936 "serial_number": "SPDK00000000000001", 00:12:59.936 "model_number": "SPDK bdev Controller", 00:12:59.936 "max_namespaces": 32, 00:12:59.936 "min_cntlid": 1, 00:12:59.936 "max_cntlid": 65519, 00:12:59.936 "namespaces": [ 00:12:59.936 { 00:12:59.936 "nsid": 1, 00:12:59.936 "bdev_name": "Null1", 00:12:59.936 "name": "Null1", 00:12:59.936 "nguid": "8932E85FF4484820BE68EA9754B0D757", 00:12:59.936 "uuid": "8932e85f-f448-4820-be68-ea9754b0d757" 00:12:59.936 } 00:12:59.936 ] 00:12:59.936 }, 00:12:59.936 { 00:12:59.936 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:59.936 "subtype": "NVMe", 00:12:59.936 "listen_addresses": [ 00:12:59.936 { 00:12:59.936 "trtype": "TCP", 00:12:59.936 "adrfam": "IPv4", 00:12:59.936 "traddr": "10.0.0.2", 00:12:59.936 "trsvcid": "4420" 00:12:59.936 } 00:12:59.936 ], 00:12:59.936 "allow_any_host": true, 00:12:59.936 "hosts": [], 00:12:59.936 "serial_number": "SPDK00000000000002", 00:12:59.936 "model_number": "SPDK bdev Controller", 00:12:59.936 "max_namespaces": 32, 00:12:59.936 "min_cntlid": 1, 00:12:59.936 "max_cntlid": 65519, 00:12:59.936 "namespaces": [ 00:12:59.936 { 00:12:59.936 "nsid": 1, 00:12:59.936 "bdev_name": "Null2", 00:12:59.936 "name": "Null2", 00:12:59.936 "nguid": "7F659745FD4245CEAB05CAC3EDDF4546", 00:12:59.936 "uuid": "7f659745-fd42-45ce-ab05-cac3eddf4546" 00:12:59.936 } 00:12:59.936 ] 00:12:59.936 }, 00:12:59.936 { 00:12:59.936 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:59.936 "subtype": "NVMe", 00:12:59.936 "listen_addresses": [ 00:12:59.936 { 00:12:59.936 "trtype": "TCP", 00:12:59.936 "adrfam": "IPv4", 00:12:59.936 "traddr": "10.0.0.2", 
00:12:59.936 "trsvcid": "4420" 00:12:59.936 } 00:12:59.936 ], 00:12:59.936 "allow_any_host": true, 00:12:59.936 "hosts": [], 00:12:59.936 "serial_number": "SPDK00000000000003", 00:12:59.936 "model_number": "SPDK bdev Controller", 00:12:59.936 "max_namespaces": 32, 00:12:59.936 "min_cntlid": 1, 00:12:59.936 "max_cntlid": 65519, 00:12:59.936 "namespaces": [ 00:12:59.936 { 00:12:59.936 "nsid": 1, 00:12:59.936 "bdev_name": "Null3", 00:12:59.936 "name": "Null3", 00:12:59.936 "nguid": "29F7C4B4200841838ACEBD3D2FB6416F", 00:12:59.936 "uuid": "29f7c4b4-2008-4183-8ace-bd3d2fb6416f" 00:12:59.936 } 00:12:59.936 ] 00:12:59.936 }, 00:12:59.936 { 00:12:59.936 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:59.936 "subtype": "NVMe", 00:12:59.936 "listen_addresses": [ 00:12:59.936 { 00:12:59.936 "trtype": "TCP", 00:12:59.936 "adrfam": "IPv4", 00:12:59.936 "traddr": "10.0.0.2", 00:12:59.936 "trsvcid": "4420" 00:12:59.936 } 00:12:59.936 ], 00:12:59.936 "allow_any_host": true, 00:12:59.936 "hosts": [], 00:12:59.936 "serial_number": "SPDK00000000000004", 00:12:59.936 "model_number": "SPDK bdev Controller", 00:12:59.936 "max_namespaces": 32, 00:12:59.936 "min_cntlid": 1, 00:12:59.936 "max_cntlid": 65519, 00:12:59.936 "namespaces": [ 00:12:59.936 { 00:12:59.937 "nsid": 1, 00:12:59.937 "bdev_name": "Null4", 00:12:59.937 "name": "Null4", 00:12:59.937 "nguid": "89C58EDC86D34CAE80FE44D2491B52A1", 00:12:59.937 "uuid": "89c58edc-86d3-4cae-80fe-44d2491b52a1" 00:12:59.937 } 00:12:59.937 ] 00:12:59.937 } 00:12:59.937 ] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:59.937 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.937 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.198 rmmod nvme_tcp 00:13:00.198 rmmod nvme_fabrics 00:13:00.198 rmmod nvme_keyring 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1945115 ']' 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1945115 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1945115 ']' 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1945115 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1945115 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1945115' 00:13:00.198 killing process with pid 1945115 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1945115 00:13:00.198 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1945115 00:13:00.459 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.459 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:02.373 00:13:02.373 real 0m11.736s 00:13:02.373 user 0m8.997s 00:13:02.373 sys 0m6.185s 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.373 ************************************ 00:13:02.373 END TEST nvmf_target_discovery 00:13:02.373 ************************************ 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.373 10:30:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.635 ************************************ 00:13:02.635 START TEST nvmf_referrals 00:13:02.635 ************************************ 00:13:02.635 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:02.635 * Looking for test storage... 
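
The nvmftestfini sequence traced just before the END TEST banner undoes the per-test network state. A hedged sketch of its effect; `ip netns delete` stands in for the `_remove_spdk_ns` helper and is an assumption about what that helper does:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
    modprobe -v -r nvme-tcp                                # matches the rmmod lines above
    modprobe -v -r nvme-fabrics
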
00:13:02.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.635 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:02.635 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.636 --rc genhtml_branch_coverage=1 00:13:02.636 --rc genhtml_function_coverage=1 00:13:02.636 --rc genhtml_legend=1 00:13:02.636 --rc geninfo_all_blocks=1 00:13:02.636 --rc geninfo_unexecuted_blocks=1 00:13:02.636 00:13:02.636 ' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.636 --rc genhtml_branch_coverage=1 00:13:02.636 --rc genhtml_function_coverage=1 00:13:02.636 --rc genhtml_legend=1 00:13:02.636 --rc geninfo_all_blocks=1 00:13:02.636 --rc geninfo_unexecuted_blocks=1 00:13:02.636 00:13:02.636 ' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.636 --rc genhtml_branch_coverage=1 00:13:02.636 --rc genhtml_function_coverage=1 00:13:02.636 --rc genhtml_legend=1 00:13:02.636 --rc geninfo_all_blocks=1 00:13:02.636 --rc geninfo_unexecuted_blocks=1 00:13:02.636 00:13:02.636 ' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:02.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.636 --rc genhtml_branch_coverage=1 00:13:02.636 --rc genhtml_function_coverage=1 00:13:02.636 --rc genhtml_legend=1 00:13:02.636 --rc geninfo_all_blocks=1 00:13:02.636 --rc geninfo_unexecuted_blocks=1 00:13:02.636 00:13:02.636 ' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.636 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.637 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.637 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.637 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
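
The NVMF_REFERRAL_IP_* constants above (with NVMF_REFERRAL_IP_3 and the 4430 referral port set next in the trace) feed the referral RPCs this test exercises against the discovery subsystem. The add/remove pair, as already driven by discovery.sh through the same rpc_cmd wrapper:

    rpc_cmd nvmf_discovery_add_referral    -t tcp -a 10.0.0.2 -s 4430
    # ...the referral then appears as a "discovery subsystem referral" log entry...
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
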
00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.637 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.898 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:02.898 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:02.898 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:02.898 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:11.040 10:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:11.040 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:11.040 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:11.040 
10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:11.040 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:11.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.040 10:30:42 
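The device scan traced above boils down to a short sysfs walk. A condensed sketch of the pattern (not the literal nvmf/common.sh; PCI addresses hard-coded from the log):

#!/usr/bin/env bash
# For each supported NIC port, collect the kernel net devices bound under it.
pci_devs=(0000:4b:00.0 0000:4b:00.1)      # the two E810 ports (0x8086:0x159b) found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue        # port has no netdev bound (driver missing)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done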
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.040 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:11.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:13:11.041 00:13:11.041 --- 10.0.0.2 ping statistics --- 00:13:11.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.041 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:13:11.041 00:13:11.041 --- 10.0.0.1 ping statistics --- 00:13:11.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.041 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1949766 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1949766 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1949766 ']' 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
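Condensed, the nvmf_tcp_init sequence traced above builds the test topology with commands taken straight from the log (cvl_0_0 and cvl_0_1 are the two E810 ports):

# Target port moves into a private namespace; the initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagging the rule so teardown can find it again:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# The two pings above (~0.6 ms and ~0.3 ms) verify reachability in both directions.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1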
00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.041 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.041 [2024-11-20 10:30:42.633347] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:13:11.041 [2024-11-20 10:30:42.633415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.041 [2024-11-20 10:30:42.733173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.041 [2024-11-20 10:30:42.785466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.041 [2024-11-20 10:30:42.785517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.041 [2024-11-20 10:30:42.785527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.041 [2024-11-20 10:30:42.785534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.041 [2024-11-20 10:30:42.785541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.041 [2024-11-20 10:30:42.787967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.041 [2024-11-20 10:30:42.788132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.041 [2024-11-20 10:30:42.788294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.041 [2024-11-20 10:30:42.788432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.302 [2024-11-20 10:30:43.500561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
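With the namespaces up, nvmfappstart launches nvmf_tgt inside the target namespace and waits on its RPC socket; the referral fixture is then a handful of RPCs. The same calls made directly with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it; the socket defaults to /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced
"$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do        # three plain referrals on port 4430
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
"$rpc" nvmf_discovery_get_referrals | jq length    # the test expects 3 here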
00:13:11.302 [2024-11-20 10:30:43.516881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:11.302 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:11.303 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:11.563 10:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:11.563 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.824 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:12.085 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:12.346 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:12.346 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:12.346 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:12.346 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:12.346 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:12.346 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.607 10:30:44 
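get_referral_ips deliberately has two back ends that must agree: the target's own referral list over RPC, and what an initiator actually sees in the discovery log page. Both views, condensed (the host NQN/ID pair is the one visible in the discover calls above):

referral_ips_rpc() {
    "$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
}
referral_ips_nvme() {
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort
}
# The comparisons in the trace pass only when both views report the same addresses.
[[ "$(referral_ips_rpc)" == "$(referral_ips_nvme)" ]]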
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:12.607 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:12.868 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:12.868 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:12.868 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:12.868 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:12.868 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:12.868 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:13.128 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
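The stretch just completed exercises qualified referrals: -n selects what the referral advertises, and the subtype field in the peer's log page is how the two flavors are told apart. The add/remove pairs reduce to:

# A referral to another discovery service (subtype "discovery subsystem referral")...
"$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
# ...and one naming a concrete subsystem, which the peer then sees as
# subtype "nvme subsystem" with subnqn nqn.2016-06.io.spdk:cnode1:
"$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
# Removal is keyed on the full tuple, NQN included:
"$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery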
00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.389 rmmod nvme_tcp 00:13:13.389 rmmod nvme_fabrics 00:13:13.389 rmmod nvme_keyring 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1949766 ']' 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1949766 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1949766 ']' 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1949766 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1949766 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1949766' 00:13:13.389 killing process with pid 1949766 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1949766 00:13:13.389 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1949766 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.648 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.649 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.649 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.649 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.649 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.649 10:30:45 
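nvmftestfini's teardown, condensed from the trace: unload the initiator-side kernel modules, kill the target, and restore iptables minus the SPDK-tagged rule from setup (the netns removal line is an assumption; _remove_spdk_ns's body is not shown in this trace):

modprobe -v -r nvme-tcp          # cascades to nvme_tcp / nvme_fabrics / nvme_keyring, per rmmod above
modprobe -v -r nvme-fabrics
kill 1949766 && wait 1949766     # nvmfpid recorded by nvmfappstart
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only our tagged rule
ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns here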
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.562 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.822 00:13:15.822 real 0m13.169s 00:13:15.822 user 0m15.382s 00:13:15.822 sys 0m6.577s 00:13:15.822 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.822 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.822 ************************************ 00:13:15.823 END TEST nvmf_referrals 00:13:15.823 ************************************ 00:13:15.823 10:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:15.823 10:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.823 10:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.823 10:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.823 ************************************ 00:13:15.823 START TEST nvmf_connect_disconnect 00:13:15.823 ************************************ 00:13:15.823 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:15.823 * Looking for test storage... 00:13:15.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.823 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.823 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.823 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.085 10:30:48 
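The real/user/sys block and the START/END banners around it are the harness's per-test framing; a minimal equivalent of that wrapper (an assumed shape, not SPDK's actual run_test):

run_test_sketch() {
    local name=$1; shift
    printf '%s\nSTART TEST %s\n%s\n' '************' "$name" '************'
    time "$@"                      # emits the real/user/sys block seen above
    printf '%s\nEND TEST %s\n%s\n' '************' "$name" '************'
}
run_test_sketch nvmf_connect_disconnect ./test/nvmf/target/connect_disconnect.sh --transport=tcp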
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.085 --rc genhtml_branch_coverage=1 00:13:16.085 --rc genhtml_function_coverage=1 00:13:16.085 --rc genhtml_legend=1 00:13:16.085 --rc geninfo_all_blocks=1 00:13:16.085 --rc geninfo_unexecuted_blocks=1 00:13:16.085 00:13:16.085 ' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.085 --rc genhtml_branch_coverage=1 00:13:16.085 --rc genhtml_function_coverage=1 00:13:16.085 --rc genhtml_legend=1 00:13:16.085 --rc geninfo_all_blocks=1 00:13:16.085 --rc geninfo_unexecuted_blocks=1 00:13:16.085 00:13:16.085 ' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.085 --rc genhtml_branch_coverage=1 00:13:16.085 --rc genhtml_function_coverage=1 00:13:16.085 --rc genhtml_legend=1 00:13:16.085 --rc geninfo_all_blocks=1 00:13:16.085 --rc geninfo_unexecuted_blocks=1 00:13:16.085 00:13:16.085 ' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
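The lcov probe above runs through scripts/common.sh's field-by-field comparator; restated compactly (an assumed simplification, numeric version fields only):

version_lt() {   # version_lt 1.15 2  -> true when $1 < $2
    local IFS=.-:                 # same separators cmp_versions splits on
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'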
common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.085 --rc genhtml_branch_coverage=1 00:13:16.085 --rc genhtml_function_coverage=1 00:13:16.085 --rc genhtml_legend=1 00:13:16.085 --rc geninfo_all_blocks=1 00:13:16.085 --rc geninfo_unexecuted_blocks=1 00:13:16.085 00:13:16.085 ' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:16.085 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.086 10:30:48 
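The long repeated /opt/... runs above are paths/export.sh being re-sourced by each nested script, prepending the same toolchain directories every time; harmless, just noisy. Purely for illustration (the harness does not do this), a dedup pass would be:

PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}    # trim the trailing ':' that ORS leaves behind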
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:16.086 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.238 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.238 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.238 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.238 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.238 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.238 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.239 
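The "[: : integer expression expected" message near the top of this stretch is build_nvmf_app_args testing an unset flag numerically ('[' '' -eq 1 ']' at nvmf/common.sh line 33); the shell prints the error, the test evaluates false, and the run continues. The usual guard for that pattern, with the variable name assumed (the trace shows only its empty value):

if [ "${SPDK_TEST_NVME_INTERRUPT:-0}" -eq 1 ]; then
    :   # interrupt-mode app arguments would be appended here
fi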
10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:24.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.239 
10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:24.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:24.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:24.239 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.239 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:13:24.240 00:13:24.240 --- 10.0.0.2 ping statistics --- 00:13:24.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.240 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:13:24.240 00:13:24.240 --- 10.0.0.1 ping statistics --- 00:13:24.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.240 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1954714 00:13:24.240 10:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1954714 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1954714 ']' 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.240 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.240 [2024-11-20 10:30:55.912689] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:13:24.240 [2024-11-20 10:30:55.912753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.240 [2024-11-20 10:30:56.012546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.240 [2024-11-20 10:30:56.066089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.240 [2024-11-20 10:30:56.066140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.240 [2024-11-20 10:30:56.066149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.240 [2024-11-20 10:30:56.066156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.240 [2024-11-20 10:30:56.066173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
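The launch recorded just above — nvmf_tgt pinned inside the cvl_0_0_ns_spdk namespace, followed by waitforlisten on /var/tmp/spdk.sock — is the pattern every test in this run repeats. A minimal sketch of that pattern, assuming a checkout at the repo root; the poll loop and its timeout are illustrative stand-ins for the harness's waitforlisten helper, not its exact implementation:

    # Start the NVMe-oF target inside the test namespace, core mask 0xF.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app answers (timeout is illustrative).
    for ((i = 0; i < 60; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

Once the socket answers, the RPC calls that follow in the log (nvmf_create_transport, bdev_malloc_create, and so on) can proceed.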
00:13:24.240 [2024-11-20 10:30:56.068567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.240 [2024-11-20 10:30:56.068729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.240 [2024-11-20 10:30:56.068889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.240 [2024-11-20 10:30:56.068890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 [2024-11-20 10:30:56.789555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 [2024-11-20 10:30:56.867821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:24.501 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:28.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.814 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.814 rmmod nvme_tcp 00:13:43.075 rmmod nvme_fabrics 00:13:43.075 rmmod nvme_keyring 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1954714 ']' 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1954714 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1954714 ']' 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1954714 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
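The five "disconnected 1 controller(s)" lines above are the visible half of the connect/disconnect loop; the loop body's own xtrace is suppressed by the set +x at target/connect_disconnect.sh@34. A reconstruction of one plausible shape for that loop, assuming nvme-cli on the initiator side and the subsystem created above — the readiness check is a stand-in, not the script's actual wait logic:

    # num_iterations=5 per the script; the serial comes from nvmf_create_subsystem -s.
    for ((i = 0; i < 5; i++)); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # Wait for the controller to enumerate before tearing it down again.
        until nvme list | grep -q SPDKISFASTANDAWESOME; do sleep 0.1; done
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the NQN:... line above
    done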
00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1954714 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1954714' 00:13:43.075 killing process with pid 1954714 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1954714 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1954714 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.075 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:45.619 00:13:45.619 real 0m29.473s 00:13:45.619 user 1m19.227s 00:13:45.619 sys 0m7.208s 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:45.619 ************************************ 00:13:45.619 END TEST nvmf_connect_disconnect 00:13:45.619 ************************************ 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.619 10:31:17 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.619 ************************************ 00:13:45.619 START TEST nvmf_multitarget 00:13:45.619 ************************************ 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:45.619 * Looking for test storage... 00:13:45.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:45.619 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:45.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.620 --rc genhtml_branch_coverage=1 00:13:45.620 --rc genhtml_function_coverage=1 00:13:45.620 --rc genhtml_legend=1 00:13:45.620 --rc geninfo_all_blocks=1 00:13:45.620 --rc geninfo_unexecuted_blocks=1 00:13:45.620 00:13:45.620 ' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:45.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.620 --rc genhtml_branch_coverage=1 00:13:45.620 --rc genhtml_function_coverage=1 00:13:45.620 --rc genhtml_legend=1 00:13:45.620 --rc geninfo_all_blocks=1 00:13:45.620 --rc geninfo_unexecuted_blocks=1 00:13:45.620 00:13:45.620 ' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:45.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.620 --rc genhtml_branch_coverage=1 00:13:45.620 --rc genhtml_function_coverage=1 00:13:45.620 --rc genhtml_legend=1 00:13:45.620 --rc geninfo_all_blocks=1 00:13:45.620 --rc geninfo_unexecuted_blocks=1 00:13:45.620 00:13:45.620 ' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:45.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.620 --rc genhtml_branch_coverage=1 00:13:45.620 --rc genhtml_function_coverage=1 00:13:45.620 --rc genhtml_legend=1 00:13:45.620 --rc geninfo_all_blocks=1 00:13:45.620 --rc geninfo_unexecuted_blocks=1 00:13:45.620 00:13:45.620 ' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.620 10:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.620 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:45.621 10:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.621 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.876 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
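The "[: : integer expression expected" complaint that common.sh line 33 prints each time it is sourced (once per test in this run) is a real, if harmless, script bug: -eq demands integers on both sides, so an unset variable expanding to the empty string makes test error out with status 2, which here happens to skip the branch just as a false comparison would. A sketch of the failure and a guarded form; the variable name is a hypothetical placeholder, since the log does not show which one line 33 actually tests:

    [ '' -eq 1 ]                   # "[: : integer expression expected", exit status 2
    [ "${SOME_FLAG:-0}" -eq 1 ]    # defaulting to 0 keeps the test quiet and correct
    # or bash arithmetic, where an empty expansion defaulted to 0 compares cleanly:
    (( ${SOME_FLAG:-0} == 1 ))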
00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.877 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.877 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:53.877 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.877 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.877 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:13:53.877 00:13:53.877 --- 10.0.0.2 ping statistics --- 00:13:53.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.877 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:53.877 00:13:53.877 --- 10.0.0.1 ping statistics --- 00:13:53.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.877 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:53.877 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1962673 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1962673 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1962673 ']' 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.878 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:53.878 [2024-11-20 10:31:25.372962] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:13:53.878 [2024-11-20 10:31:25.373030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.878 [2024-11-20 10:31:25.473915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.878 [2024-11-20 10:31:25.527741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.878 [2024-11-20 10:31:25.527795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.878 [2024-11-20 10:31:25.527803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.878 [2024-11-20 10:31:25.527811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.878 [2024-11-20 10:31:25.527817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.878 [2024-11-20 10:31:25.529805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.878 [2024-11-20 10:31:25.529964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.878 [2024-11-20 10:31:25.530132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.878 [2024-11-20 10:31:25.530133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.878 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.878 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:53.878 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.878 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.878 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:54.170 "nvmf_tgt_1" 00:13:54.170 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:54.430 "nvmf_tgt_2" 00:13:54.430 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
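The nvmf_get_targets | jq length pipeline running at this point, together with the create/delete calls that follow, is the whole of the multitarget check. Condensed from the surrounding xtrace, with the log's string comparisons rewritten as numeric tests for readability — the RPC commands themselves are verbatim:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # echoes "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # echoes "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1              # echoes "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2              # echoes "true"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only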
00:13:54.430 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:54.430 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:54.430 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:54.430 true 00:13:54.692 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:54.692 true 00:13:54.692 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:54.692 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.692 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.692 rmmod nvme_tcp 00:13:54.953 rmmod nvme_fabrics 00:13:54.953 rmmod nvme_keyring 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1962673 ']' 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1962673 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1962673 ']' 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1962673 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962673 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.953 10:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962673' 00:13:54.953 killing process with pid 1962673 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1962673 00:13:54.953 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1962673 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.215 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:57.129 00:13:57.129 real 0m11.852s 00:13:57.129 user 0m10.250s 00:13:57.129 sys 0m6.173s 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:57.129 ************************************ 00:13:57.129 END TEST nvmf_multitarget 00:13:57.129 ************************************ 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.129 10:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.390 ************************************ 00:13:57.390 START TEST nvmf_rpc 00:13:57.390 ************************************ 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:57.390 * Looking for test storage... 
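Worth noting in the teardown just logged: iptr undoes only the firewall rules this run added. Every rule goes in tagged with an 'SPDK_NVMF:' comment (the ipts helper that adds them appears later in this log), so cleanup is a single save/filter/restore round trip:

    # Drop exactly the SPDK-tagged rules, leave everything else untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore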
00:13:57.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.390 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.391 --rc genhtml_branch_coverage=1 00:13:57.391 --rc genhtml_function_coverage=1 00:13:57.391 --rc genhtml_legend=1 00:13:57.391 --rc geninfo_all_blocks=1 00:13:57.391 --rc geninfo_unexecuted_blocks=1 00:13:57.391 00:13:57.391 ' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.391 --rc genhtml_branch_coverage=1 00:13:57.391 --rc genhtml_function_coverage=1 00:13:57.391 --rc genhtml_legend=1 00:13:57.391 --rc geninfo_all_blocks=1 00:13:57.391 --rc geninfo_unexecuted_blocks=1 00:13:57.391 00:13:57.391 ' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.391 --rc genhtml_branch_coverage=1 00:13:57.391 --rc genhtml_function_coverage=1 00:13:57.391 --rc genhtml_legend=1 00:13:57.391 --rc geninfo_all_blocks=1 00:13:57.391 --rc geninfo_unexecuted_blocks=1 00:13:57.391 00:13:57.391 ' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.391 --rc genhtml_branch_coverage=1 00:13:57.391 --rc genhtml_function_coverage=1 00:13:57.391 --rc genhtml_legend=1 00:13:57.391 --rc geninfo_all_blocks=1 00:13:57.391 --rc geninfo_unexecuted_blocks=1 00:13:57.391 00:13:57.391 ' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
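The lt/cmp_versions walk traced above (deciding whether the installed lcov predates 2.x) splits both version strings into fields and compares them numerically, left to right, treating a missing field as 0. A standalone sketch of the same idea, splitting on '.' only and omitting the real script's digit guards and extra '.-:' separators:

    lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"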
00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:57.391 10:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:57.391 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.392 10:31:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:05.531 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:05.531 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.531 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:05.532 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:05.532 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:05.532 10:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.532 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:05.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:14:05.532 00:14:05.532 --- 10.0.0.2 ping statistics --- 00:14:05.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.532 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:14:05.532 00:14:05.532 --- 10.0.0.1 ping statistics --- 00:14:05.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.532 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1967377 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1967377 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1967377 ']' 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.532 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.532 [2024-11-20 10:31:37.359964] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
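Before this second target start, nvmf_tcp_init carved the two e810 ports into a target/initiator pair, which the pings above then verify in both directions: cvl_0_0 moves into a namespace as the target side, its sibling cvl_0_1 stays in the root namespace as the initiator. Condensed from the traced commands, with names and addresses taken straight from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ipts tags the rule so the iptr teardown can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator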
00:14:05.532 [2024-11-20 10:31:37.360028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.532 [2024-11-20 10:31:37.462674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.532 [2024-11-20 10:31:37.516440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.532 [2024-11-20 10:31:37.516491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.532 [2024-11-20 10:31:37.516500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.532 [2024-11-20 10:31:37.516507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.532 [2024-11-20 10:31:37.516513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.532 [2024-11-20 10:31:37.518578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.532 [2024-11-20 10:31:37.518739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.532 [2024-11-20 10:31:37.518893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.532 [2024-11-20 10:31:37.518893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:06.104 "tick_rate": 2400000000, 00:14:06.104 "poll_groups": [ 00:14:06.104 { 00:14:06.104 "name": "nvmf_tgt_poll_group_000", 00:14:06.104 "admin_qpairs": 0, 00:14:06.104 "io_qpairs": 0, 00:14:06.104 "current_admin_qpairs": 0, 00:14:06.104 "current_io_qpairs": 0, 00:14:06.104 "pending_bdev_io": 0, 00:14:06.104 "completed_nvme_io": 0, 00:14:06.104 "transports": [] 00:14:06.104 }, 00:14:06.104 { 00:14:06.104 "name": "nvmf_tgt_poll_group_001", 00:14:06.104 "admin_qpairs": 0, 00:14:06.104 "io_qpairs": 0, 00:14:06.104 "current_admin_qpairs": 0, 00:14:06.104 "current_io_qpairs": 0, 00:14:06.104 "pending_bdev_io": 0, 00:14:06.104 "completed_nvme_io": 0, 00:14:06.104 "transports": [] 00:14:06.104 }, 00:14:06.104 { 00:14:06.104 "name": "nvmf_tgt_poll_group_002", 00:14:06.104 "admin_qpairs": 0, 00:14:06.104 "io_qpairs": 0, 00:14:06.104 
"current_admin_qpairs": 0, 00:14:06.104 "current_io_qpairs": 0, 00:14:06.104 "pending_bdev_io": 0, 00:14:06.104 "completed_nvme_io": 0, 00:14:06.104 "transports": [] 00:14:06.104 }, 00:14:06.104 { 00:14:06.104 "name": "nvmf_tgt_poll_group_003", 00:14:06.104 "admin_qpairs": 0, 00:14:06.104 "io_qpairs": 0, 00:14:06.104 "current_admin_qpairs": 0, 00:14:06.104 "current_io_qpairs": 0, 00:14:06.104 "pending_bdev_io": 0, 00:14:06.104 "completed_nvme_io": 0, 00:14:06.104 "transports": [] 00:14:06.104 } 00:14:06.104 ] 00:14:06.104 }' 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:06.104 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.105 [2024-11-20 10:31:38.347372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:06.105 "tick_rate": 2400000000, 00:14:06.105 "poll_groups": [ 00:14:06.105 { 00:14:06.105 "name": "nvmf_tgt_poll_group_000", 00:14:06.105 "admin_qpairs": 0, 00:14:06.105 "io_qpairs": 0, 00:14:06.105 "current_admin_qpairs": 0, 00:14:06.105 "current_io_qpairs": 0, 00:14:06.105 "pending_bdev_io": 0, 00:14:06.105 "completed_nvme_io": 0, 00:14:06.105 "transports": [ 00:14:06.105 { 00:14:06.105 "trtype": "TCP" 00:14:06.105 } 00:14:06.105 ] 00:14:06.105 }, 00:14:06.105 { 00:14:06.105 "name": "nvmf_tgt_poll_group_001", 00:14:06.105 "admin_qpairs": 0, 00:14:06.105 "io_qpairs": 0, 00:14:06.105 "current_admin_qpairs": 0, 00:14:06.105 "current_io_qpairs": 0, 00:14:06.105 "pending_bdev_io": 0, 00:14:06.105 "completed_nvme_io": 0, 00:14:06.105 "transports": [ 00:14:06.105 { 00:14:06.105 "trtype": "TCP" 00:14:06.105 } 00:14:06.105 ] 00:14:06.105 }, 00:14:06.105 { 00:14:06.105 "name": "nvmf_tgt_poll_group_002", 00:14:06.105 "admin_qpairs": 0, 00:14:06.105 "io_qpairs": 0, 00:14:06.105 "current_admin_qpairs": 0, 00:14:06.105 "current_io_qpairs": 0, 00:14:06.105 "pending_bdev_io": 0, 00:14:06.105 "completed_nvme_io": 0, 00:14:06.105 "transports": [ 00:14:06.105 { 00:14:06.105 "trtype": "TCP" 
00:14:06.105 } 00:14:06.105 ] 00:14:06.105 }, 00:14:06.105 { 00:14:06.105 "name": "nvmf_tgt_poll_group_003", 00:14:06.105 "admin_qpairs": 0, 00:14:06.105 "io_qpairs": 0, 00:14:06.105 "current_admin_qpairs": 0, 00:14:06.105 "current_io_qpairs": 0, 00:14:06.105 "pending_bdev_io": 0, 00:14:06.105 "completed_nvme_io": 0, 00:14:06.105 "transports": [ 00:14:06.105 { 00:14:06.105 "trtype": "TCP" 00:14:06.105 } 00:14:06.105 ] 00:14:06.105 } 00:14:06.105 ] 00:14:06.105 }' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:06.105 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.366 Malloc1 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.366 [2024-11-20 10:31:38.559079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:06.366 [2024-11-20 10:31:38.596052] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:06.366 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:06.366 could not add new controller: failed to write to nvme-fabrics device 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:06.366 10:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.366 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.278 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.278 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:08.278 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.278 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:08.278 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.193 [2024-11-20 10:31:42.359892] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:10.193 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:10.193 could not add new controller: failed to write to nvme-fabrics device 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.193 
10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.193 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.106 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.106 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:12.106 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.106 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:12.106 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:14.019 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.019 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.020 
10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 [2024-11-20 10:31:46.129460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.020 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.403 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:15.403 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:15.403 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.403 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:15.403 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.957 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.958 [2024-11-20 10:31:49.891413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.958 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.340 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.340 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.340 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.340 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:19.340 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.249 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.509 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.510 [2024-11-20 10:31:53.651546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.510 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.891 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.891 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.891 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.891 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:22.891 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.802 
10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.802 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.802 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
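[Editor's note] waitforserial_disconnect is the mirror image: poll until the serial is gone. In the trace (@1223-@1235) both the plain and -l lsblk forms are grepped and the helper returns immediately because the disconnect has already landed; a sketch under the same assumptions, with a retry bound added for safety:

    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && { echo "$serial still present" >&2; return 1; }
            sleep 2                         # retry interval assumed; the trace elides it
        done
        return 0
    }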
00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 [2024-11-20 10:31:57.365337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.062 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.971 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:26.971 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:26.971 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.971 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:26.971 10:31:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:28.883 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
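[Editor's note] Stripped of the xtrace noise, every pass of the rpc.sh@81 loop is the same round trip. Condensed into one runnable iteration (rpc.py path assumed; Malloc1 was created earlier in the run; waitforserial/waitforserial_disconnect as sketched above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path assumed
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    $RPC nvmf_create_subsystem         "$SUBNQN" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener   "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns         "$SUBNQN" Malloc1 -n 5   # fixed nsid 5
    $RPC nvmf_subsystem_allow_any_host "$SUBNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" \
        -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n "$SUBNQN"
    waitforserial_disconnect SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_remove_ns      "$SUBNQN" 5
    $RPC nvmf_delete_subsystem         "$SUBNQN"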
00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.883 [2024-11-20 10:32:01.115023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.883 10:32:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.793 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:30.793 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:30.794 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.794 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:30.794 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.707 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:32.708 
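[Editor's note] The rpc.sh@99 loop that starts here drops the host side entirely: five iterations of create/listen/add-ns/remove-ns/delete, pure target-side churn. Note nvmf_subsystem_add_ns now runs without -n, so the nsid auto-assigns to 1, which @105 then removes. An equivalent sketch:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path assumed
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $RPC nvmf_create_subsystem         "$SUBNQN" -s SPDKISFASTANDAWESOME
        $RPC nvmf_subsystem_add_listener   "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_ns         "$SUBNQN" Malloc1   # nsid auto-assigns to 1
        $RPC nvmf_subsystem_allow_any_host "$SUBNQN"
        $RPC nvmf_subsystem_remove_ns      "$SUBNQN" 1
        $RPC nvmf_delete_subsystem         "$SUBNQN"
    done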
10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 [2024-11-20 10:32:04.881724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 [2024-11-20 10:32:04.953902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 
10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 [2024-11-20 10:32:05.022099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.708 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 [2024-11-20 10:32:05.094325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 [2024-11-20 10:32:05.166559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:32.971 "tick_rate": 2400000000, 00:14:32.971 "poll_groups": [ 00:14:32.971 { 00:14:32.971 "name": "nvmf_tgt_poll_group_000", 00:14:32.971 "admin_qpairs": 0, 00:14:32.971 "io_qpairs": 224, 00:14:32.971 "current_admin_qpairs": 0, 00:14:32.971 "current_io_qpairs": 0, 00:14:32.971 "pending_bdev_io": 0, 00:14:32.971 "completed_nvme_io": 274, 00:14:32.971 "transports": [ 00:14:32.971 { 00:14:32.971 "trtype": "TCP" 00:14:32.971 } 00:14:32.971 ] 00:14:32.971 }, 00:14:32.971 { 00:14:32.971 "name": "nvmf_tgt_poll_group_001", 00:14:32.971 "admin_qpairs": 1, 00:14:32.971 "io_qpairs": 223, 00:14:32.971 "current_admin_qpairs": 0, 00:14:32.971 "current_io_qpairs": 0, 00:14:32.971 "pending_bdev_io": 0, 00:14:32.971 "completed_nvme_io": 518, 00:14:32.971 "transports": [ 00:14:32.971 { 00:14:32.971 "trtype": "TCP" 00:14:32.971 } 00:14:32.971 ] 00:14:32.971 }, 00:14:32.971 { 00:14:32.971 "name": "nvmf_tgt_poll_group_002", 00:14:32.971 "admin_qpairs": 6, 00:14:32.971 "io_qpairs": 218, 00:14:32.971 "current_admin_qpairs": 0, 00:14:32.971 "current_io_qpairs": 0, 00:14:32.971 "pending_bdev_io": 0, 00:14:32.971 "completed_nvme_io": 222, 00:14:32.971 "transports": [ 00:14:32.971 { 00:14:32.971 "trtype": "TCP" 00:14:32.971 } 00:14:32.971 ] 00:14:32.971 }, 00:14:32.971 { 00:14:32.971 "name": "nvmf_tgt_poll_group_003", 00:14:32.971 "admin_qpairs": 0, 00:14:32.971 "io_qpairs": 224, 00:14:32.971 "current_admin_qpairs": 0, 00:14:32.971 "current_io_qpairs": 0, 00:14:32.971 "pending_bdev_io": 0, 00:14:32.971 "completed_nvme_io": 225, 00:14:32.971 "transports": [ 00:14:32.971 { 00:14:32.971 "trtype": "TCP" 00:14:32.971 } 00:14:32.971 ] 00:14:32.971 } 00:14:32.971 ] 00:14:32.971 }' 00:14:32.971 10:32:05 
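[Editor's note] The jsum helper that digests this nvmf_get_stats dump is a two-stage pipe, readable in the rpc.sh@19-20 trace below: jq pulls one number per poll group, awk sums them. Reconstructed from the trace (the real script feeds it the captured $stats string rather than re-querying):

    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    stats=$($RPC nvmf_get_stats)           # $RPC = path to scripts/rpc.py, assumed
    jsum '.poll_groups[].admin_qpairs'     # 0+1+6+0         -> 7,   hence (( 7 > 0 ))
    jsum '.poll_groups[].io_qpairs'        # 224+223+218+224 -> 889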
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:32.971 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.972 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:32.972 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.972 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.232 rmmod nvme_tcp 00:14:33.232 rmmod nvme_fabrics 00:14:33.232 rmmod nvme_keyring 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1967377 ']' 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1967377 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1967377 ']' 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1967377 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967377 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1967377' 00:14:33.232 killing process with pid 1967377 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1967377 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1967377 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.232 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.772 00:14:35.772 real 0m38.159s 00:14:35.772 user 1m54.205s 00:14:35.772 sys 0m7.976s 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.772 ************************************ 00:14:35.772 END TEST nvmf_rpc 00:14:35.772 ************************************ 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.772 ************************************ 00:14:35.772 START TEST nvmf_invalid 00:14:35.772 ************************************ 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:35.772 * Looking for test storage... 
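[Editor's note] The nvmftestfini teardown traced here is deliberately forgiving: modprobe -r runs under set +e (the rmmod lines above are its output), only the SPDK_NVMF-tagged iptables rules are dropped, and the leftover address on the second port is flushed. A condensed sketch; helper internals are paraphrased from the nvmf/common.sh line references in the trace:

    set +e
    modprobe -v -r nvme-tcp        # drags out nvme_fabrics / nvme_keyring too
    modprobe -v -r nvme-fabrics
    set -e
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-SPDK rules intact
    ip -4 addr flush cvl_0_1       # interface name copied from the log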
00:14:35.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.772 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.773 --rc genhtml_branch_coverage=1 00:14:35.773 --rc genhtml_function_coverage=1 00:14:35.773 --rc genhtml_legend=1 00:14:35.773 --rc geninfo_all_blocks=1 00:14:35.773 --rc geninfo_unexecuted_blocks=1 00:14:35.773 00:14:35.773 ' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.773 --rc genhtml_branch_coverage=1 00:14:35.773 --rc genhtml_function_coverage=1 00:14:35.773 --rc genhtml_legend=1 00:14:35.773 --rc geninfo_all_blocks=1 00:14:35.773 --rc geninfo_unexecuted_blocks=1 00:14:35.773 00:14:35.773 ' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.773 --rc genhtml_branch_coverage=1 00:14:35.773 --rc genhtml_function_coverage=1 00:14:35.773 --rc genhtml_legend=1 00:14:35.773 --rc geninfo_all_blocks=1 00:14:35.773 --rc geninfo_unexecuted_blocks=1 00:14:35.773 00:14:35.773 ' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:35.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.773 --rc genhtml_branch_coverage=1 00:14:35.773 --rc genhtml_function_coverage=1 00:14:35.773 --rc genhtml_legend=1 00:14:35.773 --rc geninfo_all_blocks=1 00:14:35.773 --rc geninfo_unexecuted_blocks=1 00:14:35.773 00:14:35.773 ' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:35.773 10:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
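One real, if harmless-looking, failure is buried in the trace above: build_nvmf_app_args runs '[' '' -eq 1 ']' and bash's test builtin rejects it with "line 33: [: : integer expression expected", because -eq requires an integer on both sides and the variable being tested expanded to the empty string. The usual defensive pattern, shown here as a sketch with a made-up variable name rather than the exact upstream code:

    # -eq on an unset/empty variable triggers "integer expression expected":
    #   [ "$SOME_FLAG" -eq 1 ]            # breaks when SOME_FLAG is empty
    # Defaulting the expansion keeps the test well-formed:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # SOME_FLAG is an illustrative name
        echo "flag enabled"
    fi

The test run shrugs the error off and continues (have_pci_nics=0 is still set), which is why it shows up only as a stray stderr line in the log.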
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:35.773 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.916 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:43.917 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:43.917 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:43.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:43.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
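The device discovery just traced is plain sysfs walking: for each whitelisted NIC PCI address (two Intel E810 0x159b ports in this run), the kernel publishes the bound netdev name under /sys/bus/pci/devices/<bdf>/net/. A condensed sketch of the same loop:

    # For each NIC, list its kernel net device(s) from sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do         # the two ports found above
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue                # skip if the glob matched nothing
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

In this run both globs resolve, yielding cvl_0_0 and cvl_0_1, so is_hw=yes and the TCP setup path is taken.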
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:43.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:14:43.917 00:14:43.917 --- 10.0.0.2 ping statistics --- 00:14:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.917 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:14:43.917 00:14:43.917 --- 10.0.0.1 ping statistics --- 00:14:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.917 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1977231 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1977231 00:14:43.917 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.918 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1977231 ']' 00:14:43.918 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.918 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.918 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.918 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.918 10:32:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:43.918 [2024-11-20 10:32:15.552139] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
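The nvmf_tcp_init block above, condensed: one physical port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the wire before the target starts. The same sequence as a standalone sketch (interface names and addresses as used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD in the trace.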
00:14:43.918 [2024-11-20 10:32:15.552221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.918 [2024-11-20 10:32:15.652972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.918 [2024-11-20 10:32:15.705336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.918 [2024-11-20 10:32:15.705387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.918 [2024-11-20 10:32:15.705396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.918 [2024-11-20 10:32:15.705403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.918 [2024-11-20 10:32:15.705409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.918 [2024-11-20 10:32:15.707574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.918 [2024-11-20 10:32:15.707736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.918 [2024-11-20 10:32:15.707898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.918 [2024-11-20 10:32:15.707898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:44.178 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29160 00:14:44.439 [2024-11-20 10:32:16.648823] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:44.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:44.439 { 00:14:44.439 "nqn": "nqn.2016-06.io.spdk:cnode29160", 00:14:44.439 "tgt_name": "foobar", 00:14:44.439 "method": "nvmf_create_subsystem", 00:14:44.439 "req_id": 1 00:14:44.439 } 00:14:44.439 Got JSON-RPC error response 00:14:44.439 response: 00:14:44.439 { 00:14:44.439 "code": -32603, 00:14:44.439 "message": "Unable to find target foobar" 00:14:44.439 }' 00:14:44.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:44.439 { 00:14:44.439 "nqn": "nqn.2016-06.io.spdk:cnode29160", 00:14:44.439 "tgt_name": "foobar", 00:14:44.439 "method": "nvmf_create_subsystem", 00:14:44.439 "req_id": 1 00:14:44.439 } 00:14:44.439 Got JSON-RPC error response 00:14:44.439 
response: 00:14:44.439 { 00:14:44.439 "code": -32603, 00:14:44.439 "message": "Unable to find target foobar" 00:14:44.439 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:44.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:44.439 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14773 00:14:44.701 [2024-11-20 10:32:16.857669] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14773: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:44.701 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:44.701 { 00:14:44.701 "nqn": "nqn.2016-06.io.spdk:cnode14773", 00:14:44.701 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:44.701 "method": "nvmf_create_subsystem", 00:14:44.701 "req_id": 1 00:14:44.701 } 00:14:44.701 Got JSON-RPC error response 00:14:44.701 response: 00:14:44.701 { 00:14:44.701 "code": -32602, 00:14:44.701 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:44.701 }' 00:14:44.701 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:44.701 { 00:14:44.701 "nqn": "nqn.2016-06.io.spdk:cnode14773", 00:14:44.701 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:44.701 "method": "nvmf_create_subsystem", 00:14:44.701 "req_id": 1 00:14:44.701 } 00:14:44.701 Got JSON-RPC error response 00:14:44.701 response: 00:14:44.701 { 00:14:44.701 "code": -32602, 00:14:44.701 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:44.701 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:44.701 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:44.701 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23618 00:14:44.701 [2024-11-20 10:32:17.066467] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23618: invalid model number 'SPDK_Controller' 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:44.962 { 00:14:44.962 "nqn": "nqn.2016-06.io.spdk:cnode23618", 00:14:44.962 "model_number": "SPDK_Controller\u001f", 00:14:44.962 "method": "nvmf_create_subsystem", 00:14:44.962 "req_id": 1 00:14:44.962 } 00:14:44.962 Got JSON-RPC error response 00:14:44.962 response: 00:14:44.962 { 00:14:44.962 "code": -32602, 00:14:44.962 "message": "Invalid MN SPDK_Controller\u001f" 00:14:44.962 }' 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:44.962 { 00:14:44.962 "nqn": "nqn.2016-06.io.spdk:cnode23618", 00:14:44.962 "model_number": "SPDK_Controller\u001f", 00:14:44.962 "method": "nvmf_create_subsystem", 00:14:44.962 "req_id": 1 00:14:44.962 } 00:14:44.962 Got JSON-RPC error response 00:14:44.962 response: 00:14:44.962 { 00:14:44.962 "code": -32602, 00:14:44.962 "message": "Invalid MN SPDK_Controller\u001f" 00:14:44.962 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:44.962 10:32:17 
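The three RPC failures above are the point of the test, not a regression: invalid.sh feeds nvmf_create_subsystem a nonexistent target name, then a serial number and a model number each carrying a control byte (\x1f), and asserts that the JSON-RPC response contains the expected error text ("Unable to find target", "Invalid SN", "Invalid MN") with code -32603 or -32602. A sketch of one such negative check, using the rpc.py path from this run (capturing the error via 2>&1 is an assumption about where rpc.py writes it):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Expect failure: there is no nvmf target called "foobar".
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29160 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || { echo "missing expected error: $out"; exit 1; }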
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:44.962 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:44.963 
10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 
00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'f>W1{xFTn4eI>}xevldF' 00:14:44.963 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'f>W1{xFTn4eI>}xevldF' nqn.2016-06.io.spdk:cnode9590 00:14:45.225 [2024-11-20 10:32:17.447873] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9590: invalid serial number 'f>W1{xFTn4eI>}xevldF' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:45.225 { 00:14:45.225 "nqn": "nqn.2016-06.io.spdk:cnode9590", 00:14:45.225 "serial_number": "f>W1{xF\u007fTn4eI>}xevldF", 00:14:45.225 "method": "nvmf_create_subsystem", 00:14:45.225 "req_id": 1 00:14:45.225 } 00:14:45.225 Got JSON-RPC error response 00:14:45.225 response: 00:14:45.225 { 00:14:45.225 "code": -32602, 00:14:45.225 "message": "Invalid SN f>W1{xF\u007fTn4eI>}xevldF" 00:14:45.225 }' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:45.225 { 00:14:45.225 "nqn": "nqn.2016-06.io.spdk:cnode9590", 00:14:45.225 "serial_number": "f>W1{xF\u007fTn4eI>}xevldF", 00:14:45.225 "method": "nvmf_create_subsystem", 00:14:45.225 "req_id": 1 00:14:45.225 } 00:14:45.225 Got JSON-RPC error response 00:14:45.225 response: 00:14:45.225 { 00:14:45.225 "code": -32602, 00:14:45.225 "message": "Invalid SN f>W1{xF\u007fTn4eI>}xevldF" 00:14:45.225 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' 
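gen_random_s, traced at length above for the 21-character serial and continuing below for a 41-character one, is a simple table lookup: it builds a chars array of codes 32..127, then appends "length" randomly chosen entries via printf %x / echo -e. Because invalid.sh sets RANDOM=0 up front, the "random" strings are reproducible from run to run. A compact sketch of the same routine (using printf %b in place of the script's echo -e):

    # Sketch: emit a pseudo-random string of $1 bytes with codes 32..127.
    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # 96 codes in the table: 32 (space) through 127 (DEL)
            string+=$(printf '%b' "$(printf '\\x%x' $((32 + RANDOM % 96)))")
        done
        echo "$string"
    }

    RANDOM=0            # seeded in invalid.sh for reproducibility
    gen_random_s 21     # DEL (\x7f) is a legal pick, which is why the serial
                        # above prints as 'f>W1{xFTn4eI>}xevldF' with an
                        # invisible byte after the F

That invisible DEL byte is also what the target rejects: the "Invalid SN" response renders it as \u007f.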
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.225 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x4a' 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.226 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 97 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:45.487 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
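The long printf/echo/string+= run above is the idiom invalid.sh uses to assemble a random model-number string one byte at a time: printf %x renders a character code as hex, echo -e '\xNN' turns it back into a byte, and string+= appends it under the (( ll < length )) loop counter. A minimal standalone sketch of that idiom, plus the expect-failure check it feeds; the helper name gen_random_string, its 0x21-0x7e code-point range, and the 41-byte default are illustrative assumptions the trace does not pin down:

    # Sketch only: the per-byte assembly loop visible in the trace above.
    # gen_random_string, the 0x21-0x7e range, and the 41-byte default are
    # assumptions for illustration, not taken from invalid.sh.
    gen_random_string() {
        local length=${1:-41} string='' ll code
        for ((ll = 0; ll < length; ll++)); do
            code=$(printf %x $((RANDOM % 94 + 33)))   # printable ASCII 0x21-0x7e
            string+=$(echo -e "\x$code")              # hex code back into a byte
        done
        echo "$string"
    }

    # The entries that follow show the generated string handed to
    # nvmf_create_subsystem -d, with the JSON-RPC error pattern-matched:
    mn=$(gen_random_string 41)
    out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode3618 2>&1) || true
    [[ $out == *'Invalid MN'* ]]    # the negative test passes on rejection

The same capture-and-match shape repeats below for each invalid cntlid-range case.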
00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6a' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!e>\?DM$~8KJ19d|{/aV,Nk0+lEYR?ly.{}-j$e}' 00:14:45.488 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!e>\?DM$~8KJ19d|{/aV,Nk0+lEYR?ly.{}-j$e}' nqn.2016-06.io.spdk:cnode3618 00:14:45.749 [2024-11-20 10:32:17.989888] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3618: invalid model number '!e>\?DM$~8KJ19d|{/aV,Nk0+lEYR?ly.{}-j$e}' 00:14:45.749 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:45.749 { 00:14:45.749 "nqn": "nqn.2016-06.io.spdk:cnode3618", 00:14:45.749 "model_number": "!e>\\?DM$~8KJ19d|{/aV,Nk0\u007f+lEYR?ly.{}-j$e}", 00:14:45.749 "method": "nvmf_create_subsystem", 00:14:45.749 "req_id": 1 00:14:45.749 } 00:14:45.749 Got JSON-RPC error response 00:14:45.749 response: 00:14:45.749 { 00:14:45.749 "code": -32602, 00:14:45.749 "message": "Invalid MN !e>\\?DM$~8KJ19d|{/aV,Nk0\u007f+lEYR?ly.{}-j$e}" 00:14:45.749 }' 00:14:45.749 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:45.749 { 00:14:45.749 "nqn": "nqn.2016-06.io.spdk:cnode3618", 00:14:45.749 "model_number": "!e>\\?DM$~8KJ19d|{/aV,Nk0\u007f+lEYR?ly.{}-j$e}", 00:14:45.749 "method": "nvmf_create_subsystem", 00:14:45.749 "req_id": 1 00:14:45.749 } 00:14:45.749 Got JSON-RPC error response 00:14:45.749 response: 00:14:45.749 { 00:14:45.749 "code": -32602, 00:14:45.749 "message": "Invalid MN !e>\\?DM$~8KJ19d|{/aV,Nk0\u007f+lEYR?ly.{}-j$e}" 00:14:45.749 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:45.749 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:46.009 [2024-11-20 10:32:18.174577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.009 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:46.270 [2024-11-20 10:32:18.559755] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:46.270 { 00:14:46.270 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:46.270 "listen_address": { 00:14:46.270 "trtype": "tcp", 00:14:46.270 "traddr": "", 00:14:46.270 "trsvcid": "4421" 00:14:46.270 }, 00:14:46.270 "method": "nvmf_subsystem_remove_listener", 00:14:46.270 "req_id": 1 00:14:46.270 } 00:14:46.270 Got JSON-RPC error response 00:14:46.270 response: 00:14:46.270 { 00:14:46.270 "code": -32602, 00:14:46.270 "message": "Invalid 
parameters" 00:14:46.270 }' 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:46.270 { 00:14:46.270 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:46.270 "listen_address": { 00:14:46.270 "trtype": "tcp", 00:14:46.270 "traddr": "", 00:14:46.270 "trsvcid": "4421" 00:14:46.270 }, 00:14:46.270 "method": "nvmf_subsystem_remove_listener", 00:14:46.270 "req_id": 1 00:14:46.270 } 00:14:46.270 Got JSON-RPC error response 00:14:46.270 response: 00:14:46.270 { 00:14:46.270 "code": -32602, 00:14:46.270 "message": "Invalid parameters" 00:14:46.270 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:46.270 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24430 -i 0 00:14:46.530 [2024-11-20 10:32:18.748332] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24430: invalid cntlid range [0-65519] 00:14:46.530 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:46.530 { 00:14:46.530 "nqn": "nqn.2016-06.io.spdk:cnode24430", 00:14:46.530 "min_cntlid": 0, 00:14:46.530 "method": "nvmf_create_subsystem", 00:14:46.530 "req_id": 1 00:14:46.530 } 00:14:46.530 Got JSON-RPC error response 00:14:46.530 response: 00:14:46.530 { 00:14:46.530 "code": -32602, 00:14:46.530 "message": "Invalid cntlid range [0-65519]" 00:14:46.530 }' 00:14:46.530 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:46.530 { 00:14:46.530 "nqn": "nqn.2016-06.io.spdk:cnode24430", 00:14:46.530 "min_cntlid": 0, 00:14:46.530 "method": "nvmf_create_subsystem", 00:14:46.530 "req_id": 1 00:14:46.530 } 00:14:46.530 Got JSON-RPC error response 00:14:46.530 response: 00:14:46.530 { 00:14:46.531 "code": -32602, 00:14:46.531 "message": "Invalid cntlid range [0-65519]" 00:14:46.531 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:46.531 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9816 -i 65520 00:14:46.791 [2024-11-20 10:32:18.936892] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9816: invalid cntlid range [65520-65519] 00:14:46.791 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:46.791 { 00:14:46.791 "nqn": "nqn.2016-06.io.spdk:cnode9816", 00:14:46.791 "min_cntlid": 65520, 00:14:46.791 "method": "nvmf_create_subsystem", 00:14:46.791 "req_id": 1 00:14:46.791 } 00:14:46.791 Got JSON-RPC error response 00:14:46.791 response: 00:14:46.791 { 00:14:46.791 "code": -32602, 00:14:46.791 "message": "Invalid cntlid range [65520-65519]" 00:14:46.791 }' 00:14:46.791 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:46.791 { 00:14:46.791 "nqn": "nqn.2016-06.io.spdk:cnode9816", 00:14:46.791 "min_cntlid": 65520, 00:14:46.791 "method": "nvmf_create_subsystem", 00:14:46.791 "req_id": 1 00:14:46.791 } 00:14:46.791 Got JSON-RPC error response 00:14:46.791 response: 00:14:46.791 { 00:14:46.791 "code": -32602, 00:14:46.791 "message": "Invalid cntlid range [65520-65519]" 00:14:46.791 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:46.791 10:32:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10967 -I 0 00:14:46.791 [2024-11-20 10:32:19.121460] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10967: invalid cntlid range [1-0] 00:14:46.791 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:46.791 { 00:14:46.791 "nqn": "nqn.2016-06.io.spdk:cnode10967", 00:14:46.791 "max_cntlid": 0, 00:14:46.791 "method": "nvmf_create_subsystem", 00:14:46.791 "req_id": 1 00:14:46.791 } 00:14:46.791 Got JSON-RPC error response 00:14:46.791 response: 00:14:46.791 { 00:14:46.791 "code": -32602, 00:14:46.791 "message": "Invalid cntlid range [1-0]" 00:14:46.791 }' 00:14:46.791 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:46.791 { 00:14:46.791 "nqn": "nqn.2016-06.io.spdk:cnode10967", 00:14:46.791 "max_cntlid": 0, 00:14:46.791 "method": "nvmf_create_subsystem", 00:14:46.791 "req_id": 1 00:14:46.791 } 00:14:46.791 Got JSON-RPC error response 00:14:46.791 response: 00:14:46.791 { 00:14:46.791 "code": -32602, 00:14:46.791 "message": "Invalid cntlid range [1-0]" 00:14:46.791 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:46.791 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1618 -I 65520 00:14:47.051 [2024-11-20 10:32:19.310059] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1618: invalid cntlid range [1-65520] 00:14:47.051 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:47.051 { 00:14:47.051 "nqn": "nqn.2016-06.io.spdk:cnode1618", 00:14:47.051 "max_cntlid": 65520, 00:14:47.051 "method": "nvmf_create_subsystem", 00:14:47.051 "req_id": 1 00:14:47.051 } 00:14:47.051 Got JSON-RPC error response 00:14:47.051 response: 00:14:47.051 { 00:14:47.051 "code": -32602, 00:14:47.051 "message": "Invalid cntlid range [1-65520]" 00:14:47.051 }' 00:14:47.051 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:47.051 { 00:14:47.051 "nqn": "nqn.2016-06.io.spdk:cnode1618", 00:14:47.051 "max_cntlid": 65520, 00:14:47.051 "method": "nvmf_create_subsystem", 00:14:47.051 "req_id": 1 00:14:47.051 } 00:14:47.051 Got JSON-RPC error response 00:14:47.051 response: 00:14:47.051 { 00:14:47.051 "code": -32602, 00:14:47.051 "message": "Invalid cntlid range [1-65520]" 00:14:47.051 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:47.051 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11653 -i 6 -I 5 00:14:47.311 [2024-11-20 10:32:19.494640] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11653: invalid cntlid range [6-5] 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:47.311 { 00:14:47.311 "nqn": "nqn.2016-06.io.spdk:cnode11653", 00:14:47.311 "min_cntlid": 6, 00:14:47.311 "max_cntlid": 5, 00:14:47.311 "method": "nvmf_create_subsystem", 00:14:47.311 "req_id": 1 00:14:47.311 } 00:14:47.311 Got JSON-RPC error response 00:14:47.311 response: 00:14:47.311 { 00:14:47.311 "code": -32602, 00:14:47.311 "message": "Invalid cntlid range [6-5]" 00:14:47.311 }' 00:14:47.311 10:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:47.311 { 00:14:47.311 "nqn": "nqn.2016-06.io.spdk:cnode11653", 00:14:47.311 "min_cntlid": 6, 00:14:47.311 "max_cntlid": 5, 00:14:47.311 "method": "nvmf_create_subsystem", 00:14:47.311 "req_id": 1 00:14:47.311 } 00:14:47.311 Got JSON-RPC error response 00:14:47.311 response: 00:14:47.311 { 00:14:47.311 "code": -32602, 00:14:47.311 "message": "Invalid cntlid range [6-5]" 00:14:47.311 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:47.311 { 00:14:47.311 "name": "foobar", 00:14:47.311 "method": "nvmf_delete_target", 00:14:47.311 "req_id": 1 00:14:47.311 } 00:14:47.311 Got JSON-RPC error response 00:14:47.311 response: 00:14:47.311 { 00:14:47.311 "code": -32602, 00:14:47.311 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:47.311 }' 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:47.311 { 00:14:47.311 "name": "foobar", 00:14:47.311 "method": "nvmf_delete_target", 00:14:47.311 "req_id": 1 00:14:47.311 } 00:14:47.311 Got JSON-RPC error response 00:14:47.311 response: 00:14:47.311 { 00:14:47.311 "code": -32602, 00:14:47.311 "message": "The specified target doesn't exist, cannot delete it." 00:14:47.311 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.311 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.312 rmmod nvme_tcp 00:14:47.312 rmmod nvme_fabrics 00:14:47.312 rmmod nvme_keyring 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1977231 ']' 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1977231 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1977231 ']' 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1977231 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:47.574 10:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977231 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977231' 00:14:47.574 killing process with pid 1977231 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1977231 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1977231 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.574 10:32:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.118 00:14:50.118 real 0m14.202s 00:14:50.118 user 0m21.336s 00:14:50.118 sys 0m6.718s 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:50.118 ************************************ 00:14:50.118 END TEST nvmf_invalid 00:14:50.118 ************************************ 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.118 10:32:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.118 ************************************ 00:14:50.118 START TEST nvmf_connect_stress 00:14:50.118 ************************************ 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:50.118 * Looking for test storage... 00:14:50.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.118 --rc genhtml_branch_coverage=1 00:14:50.118 --rc genhtml_function_coverage=1 00:14:50.118 --rc genhtml_legend=1 00:14:50.118 --rc geninfo_all_blocks=1 00:14:50.118 --rc geninfo_unexecuted_blocks=1 00:14:50.118 00:14:50.118 ' 00:14:50.118 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.118 --rc genhtml_branch_coverage=1 00:14:50.118 --rc genhtml_function_coverage=1 00:14:50.118 --rc genhtml_legend=1 00:14:50.118 --rc geninfo_all_blocks=1 00:14:50.118 --rc geninfo_unexecuted_blocks=1 00:14:50.118 00:14:50.118 ' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.119 --rc genhtml_branch_coverage=1 00:14:50.119 --rc genhtml_function_coverage=1 00:14:50.119 --rc genhtml_legend=1 00:14:50.119 --rc geninfo_all_blocks=1 00:14:50.119 --rc geninfo_unexecuted_blocks=1 00:14:50.119 00:14:50.119 ' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.119 --rc genhtml_branch_coverage=1 00:14:50.119 --rc genhtml_function_coverage=1 00:14:50.119 --rc genhtml_legend=1 00:14:50.119 --rc geninfo_all_blocks=1 00:14:50.119 --rc geninfo_unexecuted_blocks=1 00:14:50.119 00:14:50.119 ' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:50.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.119 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.442 10:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.442 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:58.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:58.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:58.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:58.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:14:58.443 00:14:58.443 --- 10.0.0.2 ping statistics --- 00:14:58.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.443 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:14:58.443 00:14:58.443 --- 10.0.0.1 ping statistics --- 00:14:58.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.443 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.443 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1982425 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1982425 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1982425 ']' 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:58.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.444 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 [2024-11-20 10:32:29.775491] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:14:58.444 [2024-11-20 10:32:29.775562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.444 [2024-11-20 10:32:29.876120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.444 [2024-11-20 10:32:29.927507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.444 [2024-11-20 10:32:29.927557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.444 [2024-11-20 10:32:29.927566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.444 [2024-11-20 10:32:29.927573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.444 [2024-11-20 10:32:29.927580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.444 [2024-11-20 10:32:29.929622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.444 [2024-11-20 10:32:29.929785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.444 [2024-11-20 10:32:29.929786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 [2024-11-20 10:32:30.657622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
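The rpc_cmd sequence running through this stretch is the standard target bring-up connect_stress.sh performs before starting the stress load: a TCP transport, subsystem cnode1 capped at 10 controllers, a listener on 10.0.0.2:4420, and the NULL1 null bdev. A hedged sketch of the same steps as direct rpc.py calls; the $rpc shorthand and the final namespace-add step are assumptions, since the trace here stops at bdev creation:

    # Sketch, assuming a running nvmf_tgt and rpc.py on the usual workspace path.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192           # transport flags as in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                     # allow any host, max 10 controllers
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                   # 1000 MiB, 512 B blocks
    # Assumed follow-up step: expose the null bdev as a namespace on cnode1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1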
00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 [2024-11-20 10:32:30.683385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 NULL1 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1982480 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:58.444 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:58.706 10:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:14:58.706 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.706 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.706 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:58.969 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.969 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:14:58.969 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:58.969 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.969 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.229 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.229 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:14:59.229 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.229 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.229 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.490 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.490 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:14:59.490 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.490 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.490 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:14:59.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:59.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.322 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.322 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:00.322 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.322 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.322 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.581 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.581 10:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:00.581 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.581 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.582 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.842 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.842 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:00.842 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.842 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.842 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.102 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.102 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:01.102 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.102 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.102 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.672 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.672 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:01.672 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.672 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.672 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.932 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.932 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:01.932 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.932 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.932 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:02.192 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.192 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.454 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.454 10:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:02.454 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.454 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.454 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.714 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.714 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:02.714 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.714 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.714 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.284 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.284 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:03.284 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.284 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.284 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.543 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.543 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:03.543 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.543 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.543 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.804 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.804 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:03.804 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.804 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.804 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.064 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.064 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:04.064 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.064 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.064 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.323 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.323 10:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:04.323 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.323 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.323 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.893 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:04.893 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.893 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.893 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.154 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.154 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:05.154 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.154 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.154 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.415 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.415 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:05.415 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.415 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.415 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.676 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.676 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:05.676 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.676 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.676 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.246 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.246 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:06.246 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.246 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.246 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.507 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.508 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:06.508 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.508 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.508 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.768 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.768 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:06.768 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.768 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.768 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.030 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.030 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:07.030 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.030 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.030 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.289 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.289 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:07.289 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.289 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.289 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.860 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.860 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:07.860 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.860 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.860 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.121 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.121 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:08.121 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.121 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.121 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.382 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:08.382 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.382 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.382 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.642 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1982480 00:15:08.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1982480) - No such process 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1982480 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.642 rmmod nvme_tcp 00:15:08.642 rmmod nvme_fabrics 00:15:08.642 rmmod nvme_keyring 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1982425 ']' 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1982425 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1982425 ']' 00:15:08.642 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1982425 00:15:08.642 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:15:08.642 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.642 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1982425 00:15:08.903 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
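The near-identical blocks above are iterations of the test's monitor loop: each pass probes the stress tool with kill -0 (a liveness check that delivers no signal) and replays a batch of RPCs so that admin traffic races the connect/disconnect storm. The body of connect_stress.sh is not shown in this excerpt, so the following is a reconstruction of its shape rather than the script itself:

  # $rpcs is the rpc.txt batch file assembled by the seq 1 20 / cat loop earlier
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"        # rpc_cmd: the suite's wrapper around scripts/rpc.py
  done
  wait "$PERF_PID"             # matches the 'wait 1982480' issued after 'No such process'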
00:15:08.903 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:08.903 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1982425' 00:15:08.903 killing process with pid 1982425 00:15:08.903 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1982425 00:15:08.903 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1982425 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.904 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.448 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:11.448 00:15:11.448 real 0m21.216s 00:15:11.448 user 0m42.247s 00:15:11.448 sys 0m9.345s 00:15:11.448 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.449 ************************************ 00:15:11.449 END TEST nvmf_connect_stress 00:15:11.449 ************************************ 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.449 ************************************ 00:15:11.449 START TEST nvmf_fused_ordering 00:15:11.449 ************************************ 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:11.449 * Looking for test storage... 
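Between the two tests the suite tears the network environment down; condensed from the commands visible in the trace just above (the iptables pipeline is shown verbatim there, while the netns deletion is the assumed effect of the _remove_spdk_ns helper):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the ACCEPT rules tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed: what _remove_spdk_ns performs
  ip -4 addr flush cvl_0_1                               # shown directly in the trace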
00:15:11.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.449 --rc genhtml_branch_coverage=1 00:15:11.449 --rc genhtml_function_coverage=1 00:15:11.449 --rc genhtml_legend=1 00:15:11.449 --rc geninfo_all_blocks=1 00:15:11.449 --rc geninfo_unexecuted_blocks=1 00:15:11.449 00:15:11.449 ' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.449 --rc genhtml_branch_coverage=1 00:15:11.449 --rc genhtml_function_coverage=1 00:15:11.449 --rc genhtml_legend=1 00:15:11.449 --rc geninfo_all_blocks=1 00:15:11.449 --rc geninfo_unexecuted_blocks=1 00:15:11.449 00:15:11.449 ' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.449 --rc genhtml_branch_coverage=1 00:15:11.449 --rc genhtml_function_coverage=1 00:15:11.449 --rc genhtml_legend=1 00:15:11.449 --rc geninfo_all_blocks=1 00:15:11.449 --rc geninfo_unexecuted_blocks=1 00:15:11.449 00:15:11.449 ' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.449 --rc genhtml_branch_coverage=1 00:15:11.449 --rc genhtml_function_coverage=1 00:15:11.449 --rc genhtml_legend=1 00:15:11.449 --rc geninfo_all_blocks=1 00:15:11.449 --rc geninfo_unexecuted_blocks=1 00:15:11.449 00:15:11.449 ' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:11.449 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:11.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:11.450 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:19.591 10:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:19.591 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:19.591 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.591 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:19.592 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:19.592 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:19.592 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:19.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:15:19.592 00:15:19.592 --- 10.0.0.2 ping statistics --- 00:15:19.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.592 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:15:19.592 00:15:19.592 --- 10.0.0.1 ping statistics --- 00:15:19.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.592 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1988816 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1988816 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1988816 ']' 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:19.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.592 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 [2024-11-20 10:32:51.203703] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:15:19.592 [2024-11-20 10:32:51.203771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.592 [2024-11-20 10:32:51.303357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.592 [2024-11-20 10:32:51.354188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.592 [2024-11-20 10:32:51.354237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.592 [2024-11-20 10:32:51.354246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.592 [2024-11-20 10:32:51.354252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.592 [2024-11-20 10:32:51.354259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.592 [2024-11-20 10:32:51.355041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.853 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.853 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:19.853 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.853 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.853 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.853 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.853 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.853 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.853 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.854 [2024-11-20 10:32:52.053231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.854 [2024-11-20 10:32:52.077504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.854 NULL1 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.854 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:19.854 [2024-11-20 10:32:52.146333] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
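Condensed from the xtrace above, the whole fused_ordering bring-up fits in a short shell sketch. Interface names (cvl_0_0 / cvl_0_1), addresses, ports, and NQNs are copied from this run; paths are shortened to the repo root and rpc.py stands in for the rpc_cmd wrapper the harness uses, so treat this as an illustration of what the trace performs, not the exact harness code.

  # Sketch (assumptions: repo-root-relative paths, scripts/rpc.py as the RPC
  # client behind rpc_cmd, default /var/tmp/spdk.sock RPC socket).
  ip netns add cvl_0_0_ns_spdk                         # target port gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the harness tags this rule with -m comment --comment 'SPDK_NVMF:...' so it can strip it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability

  # Start the target inside the namespace, then configure it over RPC
  # (waitforlisten blocks on /var/tmp/spdk.sock at this point in the trace).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512     # the 1 GB, 512 B-block namespace reported below
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Drive the fused-ordering workload against the listener.
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'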
00:15:19.854 [2024-11-20 10:32:52.146378] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988987 ]
00:15:20.425 Attached to nqn.2016-06.io.spdk:cnode1
00:15:20.425 Namespace ID: 1 size: 1GB
00:15:20.425 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: the tool logs 1,024 sequential fused_ordering events in total, stamped 00:15:20.425 through 00:15:22.401]
00:15:22.401 fused_ordering(1023)
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:22.401 rmmod nvme_tcp
00:15:22.401 rmmod nvme_fabrics
00:15:22.401 rmmod nvme_keyring
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:15:22.401 10:32:54
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1988816 ']' 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1988816 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1988816 ']' 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1988816 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988816 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988816' 00:15:22.401 killing process with pid 1988816 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1988816 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1988816 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.401 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:24.945 00:15:24.945 real 0m13.487s 00:15:24.945 user 0m7.001s 00:15:24.945 sys 0m7.325s 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.945 ************************************ 00:15:24.945 END TEST nvmf_fused_ordering 00:15:24.945 
************************************ 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.945 ************************************ 00:15:24.945 START TEST nvmf_ns_masking 00:15:24.945 ************************************ 00:15:24.945 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:24.945 * Looking for test storage... 00:15:24.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.945 --rc genhtml_branch_coverage=1 00:15:24.945 --rc genhtml_function_coverage=1 00:15:24.945 --rc genhtml_legend=1 00:15:24.945 --rc geninfo_all_blocks=1 00:15:24.945 --rc geninfo_unexecuted_blocks=1 00:15:24.945 00:15:24.945 ' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.945 --rc genhtml_branch_coverage=1 00:15:24.945 --rc genhtml_function_coverage=1 00:15:24.945 --rc genhtml_legend=1 00:15:24.945 --rc geninfo_all_blocks=1 00:15:24.945 --rc geninfo_unexecuted_blocks=1 00:15:24.945 00:15:24.945 ' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.945 --rc genhtml_branch_coverage=1 00:15:24.945 --rc genhtml_function_coverage=1 00:15:24.945 --rc genhtml_legend=1 00:15:24.945 --rc geninfo_all_blocks=1 00:15:24.945 --rc geninfo_unexecuted_blocks=1 00:15:24.945 00:15:24.945 ' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.945 --rc genhtml_branch_coverage=1 00:15:24.945 --rc genhtml_function_coverage=1 00:15:24.945 --rc genhtml_legend=1 00:15:24.945 --rc geninfo_all_blocks=1 00:15:24.945 --rc geninfo_unexecuted_blocks=1 00:15:24.945 00:15:24.945 ' 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.945 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
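Two things stand out in the sourcing above: paths/export.sh keeps prepending directories that are already present, which is why the traced PATH repeats the go/protoc/golangci triple many times over, and nvmf/common.sh line 33 tests an empty string numerically, producing the captured "[: : integer expression expected" complaint. The usual guard for the latter is a default expansion; the real variable is not visible in the trace, so the name below is hypothetical:

    # Hypothetical variable name; the trace only shows '[' '' -eq 1 ']'.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      :   # branch body not visible in the trace
    fi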
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e20dd0b0-b075-4c85-922e-afc90c9fdbd4 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=55b3fcc8-cc99-43c6-a8ca-95daf0bd4d8b 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=025d26f0-8149-42f3-8ebb-6f56a7addccd 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:24.946 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:33.082 10:33:04 
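Before touching the network, ns_masking.sh fixes its identifiers; everything later in the log refers back to these. Restated from the trace (the UUID values are per-run uuidgen output):

    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)     # handed to 'nvme connect -I' below
    ns1uuid=$(uuidgen)    # becomes namespace 1's NGUID near the end of the run
    ns2uuid=$(uuidgen)    # becomes namespace 2's NGUID
    loops=5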
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:33.082 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:33.083 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:33.083 10:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:33.083 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:33.083 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
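The device scan above reduces to: take the PCI functions whose IDs match the e810 table (0x1592/0x159b here), then read each function's net/ directory in sysfs to learn the interface name. Condensed from the traced nvmf/common.sh logic:

    # For each matching PCI function, sysfs names its net interface(s).
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
      net_devs+=("${pci_net_devs[@]}")          # this run: cvl_0_0, then cvl_0_1
    done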
00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:33.083 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.083 10:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:33.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:15:33.083 00:15:33.083 --- 10.0.0.2 ping statistics --- 00:15:33.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.083 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:15:33.083 00:15:33.083 --- 10.0.0.1 ping statistics --- 00:15:33.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.083 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1993795 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1993795 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1993795 ']' 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
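With cvl_0_0 and cvl_0_1 found, nvmf_tcp_init splits them across a network namespace so one physical machine can play both target and initiator; the two pings above prove the path in each direction. The wiring, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ipts wrapper adds the SPDK_NVMF comment seen in the trace:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT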
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.083 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.083 [2024-11-20 10:33:04.730477] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:15:33.083 [2024-11-20 10:33:04.730543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.083 [2024-11-20 10:33:04.830450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.083 [2024-11-20 10:33:04.881410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.083 [2024-11-20 10:33:04.881458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.083 [2024-11-20 10:33:04.881466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.084 [2024-11-20 10:33:04.881474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.084 [2024-11-20 10:33:04.881486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
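The target process itself runs inside that namespace; the DPDK EAL and app_setup_trace notices above are its startup output. From the trace (the RPC socket defaults to /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                 # 1993795 in this run
    waitforlisten "$nvmfpid"   # polls until the app answers on /var/tmp/spdk.sock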
00:15:33.084 [2024-11-20 10:33:04.882307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.344 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:33.604 [2024-11-20 10:33:05.768537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.604 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:33.604 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:33.604 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:33.865 Malloc1 00:15:33.865 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:33.865 Malloc2 00:15:33.865 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:34.126 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:34.387 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.648 [2024-11-20 10:33:06.802115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.648 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:34.648 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 025d26f0-8149-42f3-8ebb-6f56a7addccd -a 10.0.0.2 -s 4420 -i 4 00:15:34.648 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.648 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:34.648 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.648 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:34.648 
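Once the reactor is up, the whole target configuration is six RPCs plus one kernel-initiator connect, all visible above (the long /var/jenkins/... prefix on rpc.py is shortened here):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB ramdisk, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME  # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 025d26f0-8149-42f3-8ebb-6f56a7addccd -a 10.0.0.2 -s 4420 -i 4   # -I: HOSTID above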
10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:37.204 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:37.204 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:37.204 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.204 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:37.204 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.204 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:37.204 [ 0]:0x1 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f8e1aa3d9e455ca2c0e0af8fa69dbb 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f8e1aa3d9e455ca2c0e0af8fa69dbb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:37.204 [ 0]:0x1 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f8e1aa3d9e455ca2c0e0af8fa69dbb 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f8e1aa3d9e455ca2c0e0af8fa69dbb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.204 10:33:09 
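The repeated "[ 0]:0x1" / nguid blocks above are ns_masking.sh's visibility probe: a namespace the host may not see is absent from list-ns output and, if queried anyway, reports an all-zero NGUID. Reconstructed from the traced commands, with the controller name hardcoded for clarity:

    ns_is_visible() {
      nvme list-ns /dev/nvme0 | grep "$1"   # prints e.g. "[ 0]:0x1" when visible
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]   # masked ns => all zeros
    }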
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:37.204 [ 1]:0x2 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.204 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.465 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:37.726 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:37.726 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 025d26f0-8149-42f3-8ebb-6f56a7addccd -a 10.0.0.2 -s 4420 -i 4 00:15:37.726 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:37.726 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:37.726 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.726 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:37.726 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:37.726 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:40.270 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
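This is the pivot of the test: namespace 1 is torn down and re-registered with --no-auto-visible, so from here on no host sees it until explicitly allowed. The two RPCs, restated:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

The NOT ns_is_visible 0x1 check that follows passes precisely because the reconnected host now gets the all-zero NGUID for namespace 1.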
return 0 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.271 [ 0]:0x2 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.271 [ 0]:0x1 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f8e1aa3d9e455ca2c0e0af8fa69dbb 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f8e1aa3d9e455ca2c0e0af8fa69dbb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.271 [ 1]:0x2 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.271 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.531 10:33:12 
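Per-host masking is then toggled with a single RPC in each direction; namespace 1 flips between visible ("[ 0]:0x1" with its real NGUID) and masked for host1 while namespace 2 is untouched:

    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # ns 1 appears to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # ns 1 masked again

No reconnect is issued between the toggle and the check; the NOT ns_is_visible 0x1 that follows verifies the view changed in place.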
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.531 [ 0]:0x2 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.531 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.792 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:40.792 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.792 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:40.792 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.792 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:40.792 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:40.792 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 025d26f0-8149-42f3-8ebb-6f56a7addccd -a 10.0.0.2 -s 4420 -i 4 00:15:41.053 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:41.053 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:41.053 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.053 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:41.053 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:41.053 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.598 [ 0]:0x1 00:15:43.598 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f8e1aa3d9e455ca2c0e0af8fa69dbb 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f8e1aa3d9e455ca2c0e0af8fa69dbb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:43.599 [ 1]:0x2 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:43.599 [ 0]:0x2 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.599 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.859 10:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.859 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.859 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.859 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.859 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:43.859 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:43.859 [2024-11-20 10:33:16.155887] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:43.860 request: 00:15:43.860 { 00:15:43.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.860 "nsid": 2, 00:15:43.860 "host": "nqn.2016-06.io.spdk:host1", 00:15:43.860 "method": "nvmf_ns_remove_host", 00:15:43.860 "req_id": 1 00:15:43.860 } 00:15:43.860 Got JSON-RPC error response 00:15:43.860 response: 00:15:43.860 { 00:15:43.860 "code": -32602, 00:15:43.860 "message": "Invalid parameters" 00:15:43.860 } 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:43.860 10:33:16 
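The failure above is deliberate: namespace 2 was added without --no-auto-visible, which is the evident reason nvmf_rpc_ns_visible_paused rejects the request and the RPC returns the -32602 "Invalid parameters" response shown. The NOT wrapper turns that expected non-zero exit into a pass:

    # Expected to fail: ns 2 is auto-visible, so per-host masking does not apply.
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # => {"code": -32602, "message": "Invalid parameters"}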
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.860 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.120 [ 0]:0x2 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1caa37e2a4cb400d94df275458d7fd21 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1caa37e2a4cb400d94df275458d7fd21 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1996581 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
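For the final phase the test drops the kernel initiator and starts a second SPDK application as the host, with its own RPC socket so the two processes can be driven independently:

    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &   # -m 2: core mask, lands on core 1
    hostpid=$!                                          # 1996581 in this run
    waitforlisten "$hostpid" /var/tmp/host.sock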
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1996581 /var/tmp/host.sock 00:15:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1996581 ']' 00:15:44.121 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:44.121 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.121 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:44.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:44.121 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.121 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:44.121 [2024-11-20 10:33:16.417726] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:15:44.121 [2024-11-20 10:33:16.417778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996581 ] 00:15:44.381 [2024-11-20 10:33:16.505758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.381 [2024-11-20 10:33:16.541131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.951 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.951 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:44.951 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.211 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:45.211 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e20dd0b0-b075-4c85-922e-afc90c9fdbd4 00:15:45.211 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:45.211 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E20DD0B0B0754C85922EAFC90C9FDBD4 -i 00:15:45.473 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 55b3fcc8-cc99-43c6-a8ca-95daf0bd4d8b 00:15:45.473 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:45.473 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 55B3FCC8CC9943C6A8CA95DAF0BD4D8B -i 00:15:45.733 10:33:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:45.993 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:45.993 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:45.993 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:46.253 nvme0n1 00:15:46.253 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:46.253 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:46.822 nvme1n2 00:15:46.822 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:46.822 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:46.822 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:46.822 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:46.822 10:33:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:46.822 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:46.822 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:46.822 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:46.822 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:47.081 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e20dd0b0-b075-4c85-922e-afc90c9fdbd4 == \e\2\0\d\d\0\b\0\-\b\0\7\5\-\4\c\8\5\-\9\2\2\e\-\a\f\c\9\0\c\9\f\d\b\d\4 ]] 00:15:47.081 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:47.081 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:47.081 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:47.341 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
55b3fcc8-cc99-43c6-a8ca-95daf0bd4d8b == \5\5\b\3\f\c\c\8\-\c\c\9\9\-\4\3\c\6\-\a\8\c\a\-\9\5\d\a\f\0\b\d\4\d\8\b ]]
00:15:47.341 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:47.341 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e20dd0b0-b075-4c85-922e-afc90c9fdbd4
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E20DD0B0B0754C85922EAFC90C9FDBD4
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E20DD0B0B0754C85922EAFC90C9FDBD4
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:15:47.601 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E20DD0B0B0754C85922EAFC90C9FDBD4
00:15:47.861 [2024-11-20 10:33:20.018063] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:15:47.861 [2024-11-20 10:33:20.018089] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:15:47.861 [2024-11-20 10:33:20.018096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:47.861 request:
00:15:47.861 {
00:15:47.861 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:15:47.861 "namespace": {
00:15:47.861 "bdev_name": "invalid",
00:15:47.861 "nsid": 1,
00:15:47.861 "nguid": "E20DD0B0B0754C85922EAFC90C9FDBD4",
00:15:47.861 "no_auto_visible": false
00:15:47.861 },
00:15:47.861 "method": "nvmf_subsystem_add_ns",
00:15:47.861 "req_id": 1
00:15:47.861 }
00:15:47.861 Got JSON-RPC error response
00:15:47.861 response:
00:15:47.861 {
00:15:47.861 "code": -32602,
00:15:47.861 "message": "Invalid parameters"
00:15:47.861 }
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e20dd0b0-b075-4c85-922e-afc90c9fdbd4
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E20DD0B0B0754C85922EAFC90C9FDBD4 -i
00:15:47.861 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1996581
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1996581 ']'
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1996581
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1996581
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1996581'
00:15:50.440 killing process with pid 1996581
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1996581
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1996581
00:15:50.440 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.764 rmmod nvme_tcp 00:15:50.764 rmmod nvme_fabrics 00:15:50.764 rmmod nvme_keyring 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1993795 ']' 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1993795 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1993795 ']' 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1993795 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.764 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1993795 00:15:50.764 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.764 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.764 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1993795' 00:15:50.764 killing process with pid 1993795 00:15:50.764 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1993795 00:15:50.764 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1993795 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.027 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:52.938 00:15:52.938 real 0m28.303s 00:15:52.938 user 0m32.314s 00:15:52.938 sys 0m8.281s 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:52.938 ************************************ 00:15:52.938 END TEST nvmf_ns_masking 00:15:52.938 ************************************ 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.938 10:33:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.938 ************************************ 00:15:52.938 START TEST nvmf_nvme_cli 00:15:52.938 ************************************ 00:15:52.939 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:53.201 * Looking for test storage... 
00:15:53.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.201 --rc genhtml_branch_coverage=1 00:15:53.201 --rc genhtml_function_coverage=1 00:15:53.201 --rc genhtml_legend=1 00:15:53.201 --rc geninfo_all_blocks=1 00:15:53.201 --rc geninfo_unexecuted_blocks=1 00:15:53.201 00:15:53.201 ' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.201 --rc genhtml_branch_coverage=1 00:15:53.201 --rc genhtml_function_coverage=1 00:15:53.201 --rc genhtml_legend=1 00:15:53.201 --rc geninfo_all_blocks=1 00:15:53.201 --rc geninfo_unexecuted_blocks=1 00:15:53.201 00:15:53.201 ' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.201 --rc genhtml_branch_coverage=1 00:15:53.201 --rc genhtml_function_coverage=1 00:15:53.201 --rc genhtml_legend=1 00:15:53.201 --rc geninfo_all_blocks=1 00:15:53.201 --rc geninfo_unexecuted_blocks=1 00:15:53.201 00:15:53.201 ' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.201 --rc genhtml_branch_coverage=1 00:15:53.201 --rc genhtml_function_coverage=1 00:15:53.201 --rc genhtml_legend=1 00:15:53.201 --rc geninfo_all_blocks=1 00:15:53.201 --rc geninfo_unexecuted_blocks=1 00:15:53.201 00:15:53.201 ' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:53.201 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.202 10:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.202 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:01.340 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:01.340 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.340 
10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.340 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:01.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:01.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.341 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:01.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:16:01.341 00:16:01.341 --- 10.0.0.2 ping statistics --- 00:16:01.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.341 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:16:01.341 00:16:01.341 --- 10.0.0.1 ping statistics --- 00:16:01.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.341 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2002286 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2002286 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2002286 ']' 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.341 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.341 [2024-11-20 10:33:33.138863] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:16:01.341 [2024-11-20 10:33:33.138929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.341 [2024-11-20 10:33:33.238101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.341 [2024-11-20 10:33:33.292836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.341 [2024-11-20 10:33:33.292899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.341 [2024-11-20 10:33:33.292908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.341 [2024-11-20 10:33:33.292915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.341 [2024-11-20 10:33:33.292922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.341 [2024-11-20 10:33:33.295316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.341 [2024-11-20 10:33:33.295476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.341 [2024-11-20 10:33:33.295615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.341 [2024-11-20 10:33:33.295615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.602 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.602 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:01.602 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.602 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.602 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 [2024-11-20 10:33:34.016210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 Malloc0 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 Malloc1 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 [2024-11-20 10:33:34.125947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.864 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:02.126 00:16:02.126 Discovery Log Number of Records 2, Generation counter 2 00:16:02.126 =====Discovery Log Entry 0====== 00:16:02.126 trtype: tcp 00:16:02.126 adrfam: ipv4 00:16:02.126 subtype: current discovery subsystem 00:16:02.126 treq: not required 00:16:02.126 portid: 0 00:16:02.126 trsvcid: 4420 00:16:02.126 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:02.126 traddr: 10.0.0.2 00:16:02.126 eflags: explicit discovery connections, duplicate discovery information 00:16:02.126 sectype: none 00:16:02.126 =====Discovery Log Entry 1====== 00:16:02.126 trtype: tcp 00:16:02.126 adrfam: ipv4 00:16:02.126 subtype: nvme subsystem 00:16:02.126 treq: not required 00:16:02.126 portid: 0 00:16:02.126 trsvcid: 4420 00:16:02.126 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:02.126 traddr: 10.0.0.2 00:16:02.126 eflags: none 00:16:02.126 sectype: none 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:02.126 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:04.038 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:04.038 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:04.038 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.038 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:04.038 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:04.038 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:05.949 10:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.949 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:05.949 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:05.949 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.949 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:05.949 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:05.950 /dev/nvme0n2 ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:05.950 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.211 10:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.211 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:06.211 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:06.211 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.211 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:06.211 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:06.471 rmmod nvme_tcp 00:16:06.471 rmmod nvme_fabrics 00:16:06.471 rmmod nvme_keyring 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2002286 ']' 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2002286 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2002286 ']' 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2002286 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2002286 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.471 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.472 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2002286' 00:16:06.472 killing process with pid 2002286 00:16:06.472 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2002286 00:16:06.472 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2002286 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.732 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:08.649 00:16:08.649 real 0m15.639s 00:16:08.649 user 0m24.387s 00:16:08.649 sys 0m6.479s 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.649 ************************************ 00:16:08.649 END TEST nvmf_nvme_cli 00:16:08.649 ************************************ 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.649 10:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.649 ************************************ 00:16:08.649 START TEST nvmf_vfio_user 00:16:08.649 ************************************ 00:16:08.649 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:16:08.910 * Looking for test storage... 00:16:08.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.910 --rc genhtml_branch_coverage=1 00:16:08.910 --rc genhtml_function_coverage=1 00:16:08.910 --rc genhtml_legend=1 00:16:08.910 --rc geninfo_all_blocks=1 00:16:08.910 --rc geninfo_unexecuted_blocks=1 00:16:08.910 00:16:08.910 ' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.910 --rc genhtml_branch_coverage=1 00:16:08.910 --rc genhtml_function_coverage=1 00:16:08.910 --rc genhtml_legend=1 00:16:08.910 --rc geninfo_all_blocks=1 00:16:08.910 --rc geninfo_unexecuted_blocks=1 00:16:08.910 00:16:08.910 ' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.910 --rc genhtml_branch_coverage=1 00:16:08.910 --rc genhtml_function_coverage=1 00:16:08.910 --rc genhtml_legend=1 00:16:08.910 --rc geninfo_all_blocks=1 00:16:08.910 --rc geninfo_unexecuted_blocks=1 00:16:08.910 00:16:08.910 ' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:08.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.910 --rc genhtml_branch_coverage=1 00:16:08.910 --rc genhtml_function_coverage=1 00:16:08.910 --rc genhtml_legend=1 00:16:08.910 --rc geninfo_all_blocks=1 00:16:08.910 --rc geninfo_unexecuted_blocks=1 00:16:08.910 00:16:08.910 ' 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.910 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2003832 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2003832' 00:16:08.911 Process pid: 2003832 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2003832 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2003832 ']' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.911 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:09.171 [2024-11-20 10:33:41.320291] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:16:09.172 [2024-11-20 10:33:41.320368] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.172 [2024-11-20 10:33:41.408618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.172 [2024-11-20 10:33:41.444116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.172 [2024-11-20 10:33:41.444149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
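The trace above covers the harness booting the SPDK target that the vfio-user cases talk to. Condensed into plain shell, with the pid bookkeeping and cleanup traps of autotest_common.sh omitted, the launch amounts to the sketch below; the wait loop is a simplified stand-in for waitforlisten, not the verbatim helper.

    # Start nvmf_tgt pinned to cores 0-3 with all tracepoint groups enabled,
    # as target/nvmf_vfio_user.sh@54 does in the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!

    # Poll the default RPC socket until the app answers before issuing any RPCs.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 1
    done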
00:16:09.172 [2024-11-20 10:33:41.444154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:09.172 [2024-11-20 10:33:41.444165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:09.172 [2024-11-20 10:33:41.444169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:09.172 [2024-11-20 10:33:41.445507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:09.172 [2024-11-20 10:33:41.445660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:09.172 [2024-11-20 10:33:41.445772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:09.172 [2024-11-20 10:33:41.445775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:10.113 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:10.113 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:16:10.113 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:16:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:16:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:16:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:16:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:16:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:16:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:16:11.312 Malloc1
00:16:11.312 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:16:11.571 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:16:11.571 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:16:11.831 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:16:11.831 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:16:11.831 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:16:12.091 Malloc2
00:16:12.091 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
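The nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls for the second device continue just below. Stripped of the xtrace prefixes, and with rpc.py standing in for the full scripts/rpc.py path used in the trace, the per-device setup being walked through amounts to this sketch; every command and argument is taken from the trace itself.

    # One VFIOUSER transport per target, created once before the loop.
    rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
      # For vfio-user the listener "address" is a directory, not an IP:port;
      # the client later maps the control socket in it as a PCI device.
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i    # 64 MB bdev, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done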
00:16:12.091 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:12.351 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:12.612 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:12.612 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:12.612 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.612 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:12.612 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:12.612 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:12.612 [2024-11-20 10:33:44.842048] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:16:12.612 [2024-11-20 10:33:44.842093] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004574 ] 00:16:12.612 [2024-11-20 10:33:44.881486] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:12.612 [2024-11-20 10:33:44.890463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.612 [2024-11-20 10:33:44.890481] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcc91d0f000 00:16:12.612 [2024-11-20 10:33:44.891464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.892465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.893470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.894471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.895482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.896488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.897504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.898497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.612 [2024-11-20 10:33:44.899500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.612 [2024-11-20 10:33:44.899508] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcc91d04000 00:16:12.612 [2024-11-20 10:33:44.900421] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:12.612 [2024-11-20 10:33:44.913877] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:12.612 [2024-11-20 10:33:44.913898] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:12.613 [2024-11-20 10:33:44.916598] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:12.613 [2024-11-20 10:33:44.916634] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:12.613 [2024-11-20 10:33:44.916694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:12.613 [2024-11-20 10:33:44.916709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:12.613 [2024-11-20 10:33:44.916713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:12.613 [2024-11-20 10:33:44.917595] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:12.613 [2024-11-20 10:33:44.917606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:12.613 [2024-11-20 10:33:44.917612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:12.613 [2024-11-20 10:33:44.918599] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:12.613 [2024-11-20 10:33:44.918607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:12.613 [2024-11-20 10:33:44.918613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:12.613 [2024-11-20 10:33:44.919611] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:12.613 [2024-11-20 10:33:44.919618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:12.613 [2024-11-20 10:33:44.920615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
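The BAR dump above and the register traffic that follows are the standard NVMe controller-enable handshake, here carried over vfio-user instead of PCIe: the host reads CAP (offset 0x0) and VS (0x8), checks CC (0x14) against CSTS (0x1c), then sets CC.EN = 1 and polls CSTS.RDY until it reads 1 before sending Identify. The tool driving this exchange is the one invoked earlier in the trace, reproduced on its own here for readability:

    # spdk_nvme_identify treats the vfio-user listener directory as the controller
    # address; the -L flags enable the nvme/nvme_vfio/vfio_pci debug logs seen here.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci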
00:16:12.613 [2024-11-20 10:33:44.920621] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:12.613 [2024-11-20 10:33:44.920624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:12.613 [2024-11-20 10:33:44.920629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:12.613 [2024-11-20 10:33:44.920735] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:12.613 [2024-11-20 10:33:44.920738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:12.613 [2024-11-20 10:33:44.920742] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:12.613 [2024-11-20 10:33:44.921622] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:12.613 [2024-11-20 10:33:44.922627] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:12.613 [2024-11-20 10:33:44.923632] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:12.613 [2024-11-20 10:33:44.924628] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:12.613 [2024-11-20 10:33:44.924679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:12.613 [2024-11-20 10:33:44.925637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:12.613 [2024-11-20 10:33:44.925644] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:12.613 [2024-11-20 10:33:44.925648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:12.613 [2024-11-20 10:33:44.925672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925685] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.613 [2024-11-20 10:33:44.925688] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.613 [2024-11-20 10:33:44.925691] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.613 [2024-11-20 10:33:44.925703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:16:12.613 [2024-11-20 10:33:44.925742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:12.613 [2024-11-20 10:33:44.925750] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:12.613 [2024-11-20 10:33:44.925754] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:12.613 [2024-11-20 10:33:44.925757] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:12.613 [2024-11-20 10:33:44.925761] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:12.613 [2024-11-20 10:33:44.925766] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:12.613 [2024-11-20 10:33:44.925769] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:12.613 [2024-11-20 10:33:44.925773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:12.613 [2024-11-20 10:33:44.925798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:12.613 [2024-11-20 10:33:44.925806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.613 [2024-11-20 10:33:44.925812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.613 [2024-11-20 10:33:44.925818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.613 [2024-11-20 10:33:44.925824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.613 [2024-11-20 10:33:44.925827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:12.613 [2024-11-20 10:33:44.925848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:12.613 [2024-11-20 10:33:44.925853] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:12.613 
[2024-11-20 10:33:44.925857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.613 [2024-11-20 10:33:44.925882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:12.613 [2024-11-20 10:33:44.925927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925939] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:12.613 [2024-11-20 10:33:44.925942] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:12.613 [2024-11-20 10:33:44.925945] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.613 [2024-11-20 10:33:44.925949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:12.613 [2024-11-20 10:33:44.925957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:12.613 [2024-11-20 10:33:44.925964] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:12.613 [2024-11-20 10:33:44.925971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.925982] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.613 [2024-11-20 10:33:44.925985] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.613 [2024-11-20 10:33:44.925987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.613 [2024-11-20 10:33:44.925992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.613 [2024-11-20 10:33:44.926007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:12.613 [2024-11-20 10:33:44.926017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.926024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:12.613 [2024-11-20 10:33:44.926029] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.613 [2024-11-20 10:33:44.926032] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.613 [2024-11-20 10:33:44.926034] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.614 [2024-11-20 10:33:44.926038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:12.614 [2024-11-20 10:33:44.926054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926082] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:12.614 [2024-11-20 10:33:44.926086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:12.614 [2024-11-20 10:33:44.926089] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:12.614 [2024-11-20 10:33:44.926104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:12.614 [2024-11-20 10:33:44.926121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:12.614 [2024-11-20 10:33:44.926136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:12.614 [2024-11-20 10:33:44.926153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:12.614 [2024-11-20 10:33:44.926175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:12.614 [2024-11-20 10:33:44.926178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:12.614 [2024-11-20 10:33:44.926180] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:12.614 [2024-11-20 10:33:44.926183] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:12.614 [2024-11-20 10:33:44.926185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:12.614 [2024-11-20 10:33:44.926190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:12.614 [2024-11-20 10:33:44.926195] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:12.614 [2024-11-20 10:33:44.926198] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:12.614 [2024-11-20 10:33:44.926201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.614 [2024-11-20 10:33:44.926205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:12.614 [2024-11-20 10:33:44.926215] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.614 [2024-11-20 10:33:44.926217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.614 [2024-11-20 10:33:44.926222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:12.614 [2024-11-20 10:33:44.926231] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:12.614 [2024-11-20 10:33:44.926233] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:12.614 [2024-11-20 10:33:44.926237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:12.614 [2024-11-20 10:33:44.926242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:12.614 [2024-11-20 10:33:44.926251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0
00:16:12.614 [2024-11-20 10:33:44.926259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:16:12.614 [2024-11-20 10:33:44.926264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:16:12.614 =====================================================
00:16:12.614 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:16:12.614 =====================================================
00:16:12.614 Controller Capabilities/Features
00:16:12.614 ================================
00:16:12.614 Vendor ID: 4e58
00:16:12.614 Subsystem Vendor ID: 4e58
00:16:12.614 Serial Number: SPDK1
00:16:12.614 Model Number: SPDK bdev Controller
00:16:12.614 Firmware Version: 25.01
00:16:12.614 Recommended Arb Burst: 6
00:16:12.614 IEEE OUI Identifier: 8d 6b 50
00:16:12.614 Multi-path I/O
00:16:12.614 May have multiple subsystem ports: Yes
00:16:12.614 May have multiple controllers: Yes
00:16:12.614 Associated with SR-IOV VF: No
00:16:12.614 Max Data Transfer Size: 131072
00:16:12.614 Max Number of Namespaces: 32
00:16:12.614 Max Number of I/O Queues: 127
00:16:12.614 NVMe Specification Version (VS): 1.3
00:16:12.614 NVMe Specification Version (Identify): 1.3
00:16:12.614 Maximum Queue Entries: 256
00:16:12.614 Contiguous Queues Required: Yes
00:16:12.614 Arbitration Mechanisms Supported
00:16:12.614 Weighted Round Robin: Not Supported
00:16:12.614 Vendor Specific: Not Supported
00:16:12.614 Reset Timeout: 15000 ms
00:16:12.614 Doorbell Stride: 4 bytes
00:16:12.614 NVM Subsystem Reset: Not Supported
00:16:12.614 Command Sets Supported
00:16:12.614 NVM Command Set: Supported
00:16:12.614 Boot Partition: Not Supported
00:16:12.614 Memory Page Size Minimum: 4096 bytes
00:16:12.614 Memory Page Size Maximum: 4096 bytes
00:16:12.614 Persistent Memory Region: Not Supported
00:16:12.614 Optional Asynchronous Events Supported
00:16:12.614 Namespace Attribute Notices: Supported
00:16:12.614 Firmware Activation Notices: Not Supported
00:16:12.614 ANA Change Notices: Not Supported
00:16:12.614 PLE Aggregate Log Change Notices: Not Supported
00:16:12.614 LBA Status Info Alert Notices: Not Supported
00:16:12.614 EGE Aggregate Log Change Notices: Not Supported
00:16:12.614 Normal NVM Subsystem Shutdown event: Not Supported
00:16:12.614 Zone Descriptor Change Notices: Not Supported
00:16:12.614 Discovery Log Change Notices: Not Supported
00:16:12.614 Controller Attributes
00:16:12.614 128-bit Host Identifier: Supported
00:16:12.614 Non-Operational Permissive Mode: Not Supported
00:16:12.614 NVM Sets: Not Supported
00:16:12.614 Read Recovery Levels: Not Supported
00:16:12.614 Endurance Groups: Not Supported
00:16:12.614 Predictable Latency Mode: Not Supported
00:16:12.614 Traffic Based Keep ALive: Not Supported
00:16:12.614 Namespace Granularity: Not Supported
00:16:12.614 SQ Associations: Not Supported
00:16:12.614 UUID List: Not Supported
00:16:12.614 Multi-Domain Subsystem: Not Supported
00:16:12.614 Fixed Capacity Management: Not Supported
00:16:12.614 Variable Capacity Management: Not Supported
00:16:12.614 Delete Endurance Group: Not Supported
00:16:12.614 Delete NVM Set: Not Supported
00:16:12.614 Extended LBA Formats Supported: Not Supported
00:16:12.614 Flexible Data Placement Supported: Not Supported
00:16:12.614
00:16:12.614 Controller Memory Buffer Support
00:16:12.614 ================================
00:16:12.614 Supported: No
00:16:12.614
00:16:12.614 Persistent Memory Region Support
00:16:12.614 ================================
00:16:12.614 Supported: No
00:16:12.614
00:16:12.614 Admin Command Set Attributes
00:16:12.614 ============================
00:16:12.614 Security Send/Receive: Not Supported
00:16:12.614 Format NVM: Not Supported
00:16:12.614 Firmware Activate/Download: Not Supported
00:16:12.614 Namespace Management: Not Supported
00:16:12.614 Device Self-Test: Not Supported
00:16:12.614 Directives: Not Supported
00:16:12.614 NVMe-MI: Not Supported
00:16:12.614 Virtualization Management: Not Supported
00:16:12.614 Doorbell Buffer Config: Not Supported
00:16:12.614 Get LBA Status Capability: Not Supported
00:16:12.614 Command & Feature Lockdown Capability: Not Supported
00:16:12.614 Abort Command Limit: 4
00:16:12.614 Async Event Request Limit: 4
00:16:12.614 Number of Firmware Slots: N/A
00:16:12.614 Firmware Slot 1 Read-Only: N/A
00:16:12.614 Firmware Activation Without Reset: N/A
00:16:12.614 Multiple Update Detection Support: N/A
00:16:12.614 Firmware Update Granularity: No Information Provided
00:16:12.614 Per-Namespace SMART Log: No
00:16:12.615 Asymmetric Namespace Access Log Page: Not Supported
00:16:12.615 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:16:12.615 Command Effects Log Page: Supported
00:16:12.615 Get Log Page Extended Data: Supported
00:16:12.615 Telemetry Log Pages: Not Supported
00:16:12.615 Persistent Event Log Pages: Not Supported
00:16:12.615 Supported Log Pages Log Page: May Support
00:16:12.615 Commands Supported & Effects Log Page: Not Supported
00:16:12.615 Feature Identifiers & Effects Log Page:May Support
00:16:12.615 NVMe-MI Commands & Effects Log Page: May Support
00:16:12.615 Data Area 4 for Telemetry Log: Not Supported
00:16:12.615 Error Log Page Entries Supported: 128
00:16:12.615 Keep Alive: Supported
00:16:12.615 Keep Alive Granularity: 10000 ms
00:16:12.615
00:16:12.615 NVM Command Set Attributes
00:16:12.615 ==========================
00:16:12.615 Submission Queue Entry Size
00:16:12.615 Max: 64
00:16:12.615 Min: 64
00:16:12.615 Completion Queue Entry Size
00:16:12.615 Max: 16
00:16:12.615 Min: 16
00:16:12.615 Number of Namespaces: 32
00:16:12.615 Compare Command: Supported
00:16:12.615 Write Uncorrectable Command: Not Supported
00:16:12.615 Dataset Management Command: Supported
00:16:12.615 Write Zeroes Command: Supported
00:16:12.615 Set Features Save Field: Not Supported
00:16:12.615 Reservations: Not Supported
00:16:12.615 Timestamp: Not Supported
00:16:12.615 Copy: Supported
00:16:12.615 Volatile Write Cache: Present
00:16:12.615 Atomic Write Unit (Normal): 1
00:16:12.615 Atomic Write Unit (PFail): 1
00:16:12.615 Atomic Compare & Write Unit: 1
00:16:12.615 Fused Compare & Write: Supported
00:16:12.615 Scatter-Gather List
00:16:12.615 SGL Command Set: Supported (Dword aligned)
00:16:12.615 SGL Keyed: Not Supported
00:16:12.615 SGL Bit Bucket Descriptor: Not Supported
00:16:12.615 SGL Metadata Pointer: Not Supported
00:16:12.615 Oversized SGL: Not Supported
00:16:12.615 SGL Metadata Address: Not Supported
00:16:12.615 SGL Offset: Not Supported
00:16:12.615 Transport SGL Data Block: Not Supported
00:16:12.615 Replay Protected Memory Block: Not Supported
00:16:12.615
00:16:12.615 Firmware Slot Information
00:16:12.615 =========================
00:16:12.615 Active slot: 1
00:16:12.615 Slot 1 Firmware Revision: 25.01
00:16:12.615
00:16:12.615
00:16:12.615 Commands Supported and Effects
00:16:12.615 ==============================
00:16:12.615 Admin Commands
00:16:12.615 --------------
00:16:12.615 Get Log Page (02h): Supported
00:16:12.615 Identify (06h): Supported
00:16:12.615 Abort (08h): Supported
00:16:12.615 Set Features (09h): Supported
00:16:12.615 Get Features (0Ah): Supported
00:16:12.615 Asynchronous Event Request (0Ch): Supported
00:16:12.615 Keep Alive (18h): Supported
00:16:12.615 I/O Commands
00:16:12.615 ------------
00:16:12.615 Flush (00h): Supported LBA-Change
00:16:12.615 Write (01h): Supported LBA-Change
00:16:12.615 Read (02h): Supported
00:16:12.615 Compare (05h): Supported
00:16:12.615 Write Zeroes (08h): Supported LBA-Change
00:16:12.615 Dataset Management (09h): Supported LBA-Change
00:16:12.615 Copy (19h): Supported LBA-Change
00:16:12.615
00:16:12.615 Error Log
00:16:12.615 =========
00:16:12.615
00:16:12.615 Arbitration
00:16:12.615 ===========
00:16:12.615 Arbitration Burst: 1
00:16:12.615
00:16:12.615 Power Management
00:16:12.615 ================
00:16:12.615 Number of Power States: 1
00:16:12.615 Current Power State: Power State #0
00:16:12.615 Power State #0:
00:16:12.615 Max Power: 0.00 W
00:16:12.615 Non-Operational State: Operational
00:16:12.615 Entry Latency: Not Reported
00:16:12.615 Exit Latency: Not Reported
00:16:12.615 Relative Read Throughput: 0
00:16:12.615 Relative Read Latency: 0
00:16:12.615 Relative Write Throughput: 0
00:16:12.615 Relative Write Latency: 0
00:16:12.615 Idle Power: Not Reported
00:16:12.615 Active Power: Not Reported
00:16:12.615 Non-Operational Permissive Mode: Not Supported
00:16:12.615
00:16:12.615 Health Information
00:16:12.615 ==================
00:16:12.615 Critical Warnings:
00:16:12.615 Available Spare Space: OK
00:16:12.615 Temperature: OK
00:16:12.615 Device Reliability: OK
00:16:12.615 Read Only: No
00:16:12.615 Volatile Memory Backup: OK
00:16:12.615 Current Temperature: 0 Kelvin (-273 Celsius)
00:16:12.615 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:12.615 Available Spare: 0%
00:16:12.615 Available Sp[2024-11-20 10:33:44.926339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:16:12.615 [2024-11-20 10:33:44.926349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:16:12.615 [2024-11-20 10:33:44.926370] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:16:12.615 [2024-11-20 10:33:44.926378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.615 [2024-11-20 10:33:44.926383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.615 [2024-11-20 10:33:44.926387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.615 [2024-11-20 10:33:44.926392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.615 [2024-11-20 10:33:44.928165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:16:12.615 [2024-11-20 10:33:44.928174] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:16:12.615 [2024-11-20 10:33:44.928663]
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:12.615 [2024-11-20 10:33:44.928703] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:12.615 [2024-11-20 10:33:44.928707] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:12.615 [2024-11-20 10:33:44.929665] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:12.615 [2024-11-20 10:33:44.929674] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:12.615 [2024-11-20 10:33:44.929729] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:12.615 [2024-11-20 10:33:44.930685] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:12.615 are Threshold: 0% 00:16:12.615 Life Percentage Used: 0% 00:16:12.615 Data Units Read: 0 00:16:12.615 Data Units Written: 0 00:16:12.615 Host Read Commands: 0 00:16:12.615 Host Write Commands: 0 00:16:12.615 Controller Busy Time: 0 minutes 00:16:12.615 Power Cycles: 0 00:16:12.615 Power On Hours: 0 hours 00:16:12.615 Unsafe Shutdowns: 0 00:16:12.615 Unrecoverable Media Errors: 0 00:16:12.615 Lifetime Error Log Entries: 0 00:16:12.615 Warning Temperature Time: 0 minutes 00:16:12.615 Critical Temperature Time: 0 minutes 00:16:12.615 00:16:12.615 Number of Queues 00:16:12.615 ================ 00:16:12.615 Number of I/O Submission Queues: 127 00:16:12.615 Number of I/O Completion Queues: 127 00:16:12.615 00:16:12.615 Active Namespaces 00:16:12.615 ================= 00:16:12.615 Namespace ID:1 00:16:12.615 Error Recovery Timeout: Unlimited 00:16:12.615 Command Set Identifier: NVM (00h) 00:16:12.615 Deallocate: Supported 00:16:12.615 Deallocated/Unwritten Error: Not Supported 00:16:12.615 Deallocated Read Value: Unknown 00:16:12.615 Deallocate in Write Zeroes: Not Supported 00:16:12.615 Deallocated Guard Field: 0xFFFF 00:16:12.615 Flush: Supported 00:16:12.615 Reservation: Supported 00:16:12.615 Namespace Sharing Capabilities: Multiple Controllers 00:16:12.615 Size (in LBAs): 131072 (0GiB) 00:16:12.615 Capacity (in LBAs): 131072 (0GiB) 00:16:12.615 Utilization (in LBAs): 131072 (0GiB) 00:16:12.615 NGUID: 2D84C8CFDF4549F49AA7BA0C593764FB 00:16:12.615 UUID: 2d84c8cf-df45-49f4-9aa7-ba0c593764fb 00:16:12.615 Thin Provisioning: Not Supported 00:16:12.615 Per-NS Atomic Units: Yes 00:16:12.615 Atomic Boundary Size (Normal): 0 00:16:12.615 Atomic Boundary Size (PFail): 0 00:16:12.615 Atomic Boundary Offset: 0 00:16:12.615 Maximum Single Source Range Length: 65535 00:16:12.615 Maximum Copy Length: 65535 00:16:12.615 Maximum Source Range Count: 1 00:16:12.615 NGUID/EUI64 Never Reused: No 00:16:12.615 Namespace Write Protected: No 00:16:12.615 Number of LBA Formats: 1 00:16:12.615 Current LBA Format: LBA Format #00 00:16:12.615 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:12.615 00:16:12.615 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:16:12.875 [2024-11-20 10:33:45.119844] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:18.155 Initializing NVMe Controllers 00:16:18.155 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:18.155 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:18.155 Initialization complete. Launching workers. 00:16:18.155 ======================================================== 00:16:18.155 Latency(us) 00:16:18.155 Device Information : IOPS MiB/s Average min max 00:16:18.155 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39969.63 156.13 3202.30 847.98 6945.40 00:16:18.155 ======================================================== 00:16:18.155 Total : 39969.63 156.13 3202.30 847.98 6945.40 00:16:18.155 00:16:18.155 [2024-11-20 10:33:50.140311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:18.155 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:18.155 [2024-11-20 10:33:50.329142] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:23.436 Initializing NVMe Controllers 00:16:23.436 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:23.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:23.436 Initialization complete. Launching workers. 
00:16:23.436 ======================================================== 00:16:23.436 Latency(us) 00:16:23.436 Device Information : IOPS MiB/s Average min max 00:16:23.436 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7985.27 4985.58 10978.36 00:16:23.436 ======================================================== 00:16:23.436 Total : 16051.20 62.70 7985.27 4985.58 10978.36 00:16:23.436 00:16:23.436 [2024-11-20 10:33:55.365591] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:23.436 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:23.436 [2024-11-20 10:33:55.564433] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:28.711 [2024-11-20 10:34:00.624350] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:28.711 Initializing NVMe Controllers 00:16:28.711 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:28.711 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:28.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:28.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:28.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:28.711 Initialization complete. Launching workers. 00:16:28.711 Starting thread on core 2 00:16:28.711 Starting thread on core 3 00:16:28.711 Starting thread on core 1 00:16:28.711 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:28.711 [2024-11-20 10:34:00.873507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:32.004 [2024-11-20 10:34:03.937129] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:32.004 Initializing NVMe Controllers 00:16:32.004 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.004 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.004 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:32.004 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:32.004 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:32.004 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:32.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:32.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:32.004 Initialization complete. Launching workers. 
00:16:32.004 Starting thread on core 1 with urgent priority queue 00:16:32.004 Starting thread on core 2 with urgent priority queue 00:16:32.004 Starting thread on core 3 with urgent priority queue 00:16:32.004 Starting thread on core 0 with urgent priority queue 00:16:32.004 SPDK bdev Controller (SPDK1 ) core 0: 12610.67 IO/s 7.93 secs/100000 ios 00:16:32.004 SPDK bdev Controller (SPDK1 ) core 1: 9765.33 IO/s 10.24 secs/100000 ios 00:16:32.004 SPDK bdev Controller (SPDK1 ) core 2: 14296.33 IO/s 6.99 secs/100000 ios 00:16:32.004 SPDK bdev Controller (SPDK1 ) core 3: 8682.00 IO/s 11.52 secs/100000 ios 00:16:32.004 ======================================================== 00:16:32.004 00:16:32.004 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:32.004 [2024-11-20 10:34:04.180596] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:32.004 Initializing NVMe Controllers 00:16:32.004 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.004 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.004 Namespace ID: 1 size: 0GB 00:16:32.004 Initialization complete. 00:16:32.004 INFO: using host memory buffer for IO 00:16:32.004 Hello world! 00:16:32.004 [2024-11-20 10:34:04.216819] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:32.004 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:32.264 [2024-11-20 10:34:04.449617] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.202 Initializing NVMe Controllers 00:16:33.202 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.202 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.202 Initialization complete. Launching workers. 
00:16:33.202 submit (in ns) avg, min, max = 5343.6, 2820.0, 3998018.3 00:16:33.202 complete (in ns) avg, min, max = 17782.4, 1643.3, 3998525.0 00:16:33.202 00:16:33.202 Submit histogram 00:16:33.202 ================ 00:16:33.202 Range in us Cumulative Count 00:16:33.202 2.813 - 2.827: 0.2734% ( 55) 00:16:33.202 2.827 - 2.840: 1.7644% ( 300) 00:16:33.202 2.840 - 2.853: 4.1402% ( 478) 00:16:33.202 2.853 - 2.867: 8.7873% ( 935) 00:16:33.202 2.867 - 2.880: 13.6133% ( 971) 00:16:33.202 2.880 - 2.893: 19.2396% ( 1132) 00:16:33.202 2.893 - 2.907: 25.7356% ( 1307) 00:16:33.202 2.907 - 2.920: 31.6650% ( 1193) 00:16:33.202 2.920 - 2.933: 37.6640% ( 1207) 00:16:33.202 2.933 - 2.947: 42.9076% ( 1055) 00:16:33.202 2.947 - 2.960: 48.2356% ( 1072) 00:16:33.202 2.960 - 2.973: 54.1402% ( 1188) 00:16:33.202 2.973 - 2.987: 62.3310% ( 1648) 00:16:33.202 2.987 - 3.000: 71.6551% ( 1876) 00:16:33.202 3.000 - 3.013: 80.4920% ( 1778) 00:16:33.202 3.013 - 3.027: 87.8976% ( 1490) 00:16:33.202 3.027 - 3.040: 92.8628% ( 999) 00:16:33.202 3.040 - 3.053: 96.1282% ( 657) 00:16:33.202 3.053 - 3.067: 97.9274% ( 362) 00:16:33.202 3.067 - 3.080: 98.9215% ( 200) 00:16:33.202 3.080 - 3.093: 99.3738% ( 91) 00:16:33.202 3.093 - 3.107: 99.5328% ( 32) 00:16:33.202 3.107 - 3.120: 99.6074% ( 15) 00:16:33.202 3.120 - 3.133: 99.6421% ( 7) 00:16:33.202 3.133 - 3.147: 99.6670% ( 5) 00:16:33.202 3.160 - 3.173: 99.6769% ( 2) 00:16:33.202 3.173 - 3.187: 99.6869% ( 2) 00:16:33.202 3.187 - 3.200: 99.6968% ( 2) 00:16:33.202 3.400 - 3.413: 99.7018% ( 1) 00:16:33.202 3.680 - 3.707: 99.7068% ( 1) 00:16:33.203 3.813 - 3.840: 99.7117% ( 1) 00:16:33.203 4.480 - 4.507: 99.7167% ( 1) 00:16:33.203 4.507 - 4.533: 99.7217% ( 1) 00:16:33.203 4.533 - 4.560: 99.7316% ( 2) 00:16:33.203 4.560 - 4.587: 99.7366% ( 1) 00:16:33.203 4.613 - 4.640: 99.7416% ( 1) 00:16:33.203 4.720 - 4.747: 99.7515% ( 2) 00:16:33.203 4.773 - 4.800: 99.7614% ( 2) 00:16:33.203 4.800 - 4.827: 99.7763% ( 3) 00:16:33.203 4.827 - 4.853: 99.7813% ( 1) 00:16:33.203 4.907 - 4.933: 99.7863% ( 1) 00:16:33.203 4.933 - 4.960: 99.7913% ( 1) 00:16:33.203 4.960 - 4.987: 99.8012% ( 2) 00:16:33.203 5.013 - 5.040: 99.8111% ( 2) 00:16:33.203 5.040 - 5.067: 99.8161% ( 1) 00:16:33.203 5.067 - 5.093: 99.8211% ( 1) 00:16:33.203 5.093 - 5.120: 99.8360% ( 3) 00:16:33.203 5.147 - 5.173: 99.8410% ( 1) 00:16:33.203 5.173 - 5.200: 99.8459% ( 1) 00:16:33.203 5.200 - 5.227: 99.8559% ( 2) 00:16:33.203 5.280 - 5.307: 99.8757% ( 4) 00:16:33.203 5.307 - 5.333: 99.8807% ( 1) 00:16:33.203 5.360 - 5.387: 99.8857% ( 1) 00:16:33.203 5.413 - 5.440: 99.8907% ( 1) 00:16:33.203 5.733 - 5.760: 99.8956% ( 1) 00:16:33.203 5.867 - 5.893: 99.9006% ( 1) 00:16:33.203 6.133 - 6.160: 99.9105% ( 2) 00:16:33.203 6.160 - 6.187: 99.9155% ( 1) 00:16:33.203 6.427 - 6.453: 99.9205% ( 1) 00:16:33.203 6.587 - 6.613: 99.9304% ( 2) 00:16:33.203 10.507 - 10.560: 99.9354% ( 1) 00:16:33.203 11.360 - 11.413: 99.9404% ( 1) 00:16:33.203 3986.773 - 4014.080: 100.0000% ( 12) 00:16:33.203 00:16:33.203 Complete histogram 00:16:33.203 ================== 00:16:33.203 Range in us Cumulative Count 00:16:33.203 1.640 - 1.647: 0.0050% ( 1) 00:16:33.203 1.647 - 1.653: 0.0845% ( 16) 00:16:33.203 1.653 - 1.660: 0.7555% ( 135) 00:16:33.203 1.660 - 1.667: 0.8101% ( 11) 00:16:33.203 1.667 - 1.673: 0.8648% ( 11) 00:16:33.203 1.673 - 1.680: 0.9642% ( 20) 00:16:33.203 1.680 - 1.687: 0.9791% ( 3) 00:16:33.203 1.693 - 1.700: 4.4632% ( 701) 00:16:33.203 1.700 - 1.707: 54.7416% ( 10116) 00:16:33.203 1.707 - 1.720: 69.7068% ( 3011) 00:16:33.203 1.720 - 1.733: 81.9334% ( 
2460) 00:16:33.203 1.733 - 1.747: 86.6352% ( 946) 00:16:33.203 1.747 - 1.760: 87.7336% ( 221) 00:16:33.203 1.760 - 1.773: 92.5249% ( 964) 00:16:33.203 1.773 - [2024-11-20 10:34:05.468238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.203 1.787: 96.8091% ( 862) 00:16:33.203 1.787 - 1.800: 98.5586% ( 352) 00:16:33.203 1.800 - 1.813: 99.2495% ( 139) 00:16:33.203 1.813 - 1.827: 99.4085% ( 32) 00:16:33.203 1.827 - 1.840: 99.4185% ( 2) 00:16:33.203 1.880 - 1.893: 99.4284% ( 2) 00:16:33.203 1.907 - 1.920: 99.4334% ( 1) 00:16:33.203 3.240 - 3.253: 99.4433% ( 2) 00:16:33.203 3.253 - 3.267: 99.4483% ( 1) 00:16:33.203 3.267 - 3.280: 99.4533% ( 1) 00:16:33.203 3.307 - 3.320: 99.4583% ( 1) 00:16:33.203 3.400 - 3.413: 99.4632% ( 1) 00:16:33.203 3.413 - 3.440: 99.4732% ( 2) 00:16:33.203 3.440 - 3.467: 99.4831% ( 2) 00:16:33.203 3.520 - 3.547: 99.4930% ( 2) 00:16:33.203 3.547 - 3.573: 99.4980% ( 1) 00:16:33.203 3.600 - 3.627: 99.5030% ( 1) 00:16:33.203 3.627 - 3.653: 99.5080% ( 1) 00:16:33.203 3.707 - 3.733: 99.5229% ( 3) 00:16:33.203 3.787 - 3.813: 99.5328% ( 2) 00:16:33.203 3.813 - 3.840: 99.5378% ( 1) 00:16:33.203 3.840 - 3.867: 99.5427% ( 1) 00:16:33.203 3.920 - 3.947: 99.5477% ( 1) 00:16:33.203 4.000 - 4.027: 99.5527% ( 1) 00:16:33.203 4.987 - 5.013: 99.5577% ( 1) 00:16:33.203 5.067 - 5.093: 99.5626% ( 1) 00:16:33.203 5.280 - 5.307: 99.5676% ( 1) 00:16:33.203 5.680 - 5.707: 99.5726% ( 1) 00:16:33.203 5.787 - 5.813: 99.5775% ( 1) 00:16:33.203 7.520 - 7.573: 99.5825% ( 1) 00:16:33.203 7.573 - 7.627: 99.5875% ( 1) 00:16:33.203 10.187 - 10.240: 99.5924% ( 1) 00:16:33.203 46.293 - 46.507: 99.5974% ( 1) 00:16:33.203 3604.480 - 3631.787: 99.6024% ( 1) 00:16:33.203 3986.773 - 4014.080: 100.0000% ( 80) 00:16:33.203 00:16:33.203 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:33.203 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:33.203 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:33.203 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:33.203 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.462 [ 00:16:33.462 { 00:16:33.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:33.462 "subtype": "Discovery", 00:16:33.462 "listen_addresses": [], 00:16:33.462 "allow_any_host": true, 00:16:33.462 "hosts": [] 00:16:33.462 }, 00:16:33.462 { 00:16:33.462 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:33.462 "subtype": "NVMe", 00:16:33.462 "listen_addresses": [ 00:16:33.462 { 00:16:33.462 "trtype": "VFIOUSER", 00:16:33.462 "adrfam": "IPv4", 00:16:33.462 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:33.462 "trsvcid": "0" 00:16:33.462 } 00:16:33.462 ], 00:16:33.462 "allow_any_host": true, 00:16:33.462 "hosts": [], 00:16:33.462 "serial_number": "SPDK1", 00:16:33.462 "model_number": "SPDK bdev Controller", 00:16:33.462 "max_namespaces": 32, 00:16:33.462 "min_cntlid": 1, 00:16:33.462 "max_cntlid": 65519, 00:16:33.462 "namespaces": [ 00:16:33.462 { 00:16:33.462 "nsid": 1, 00:16:33.462 "bdev_name": "Malloc1", 00:16:33.462 "name": "Malloc1", 
00:16:33.462 "nguid": "2D84C8CFDF4549F49AA7BA0C593764FB", 00:16:33.462 "uuid": "2d84c8cf-df45-49f4-9aa7-ba0c593764fb" 00:16:33.462 } 00:16:33.462 ] 00:16:33.462 }, 00:16:33.462 { 00:16:33.462 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:33.462 "subtype": "NVMe", 00:16:33.462 "listen_addresses": [ 00:16:33.462 { 00:16:33.462 "trtype": "VFIOUSER", 00:16:33.462 "adrfam": "IPv4", 00:16:33.462 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:33.462 "trsvcid": "0" 00:16:33.462 } 00:16:33.462 ], 00:16:33.462 "allow_any_host": true, 00:16:33.462 "hosts": [], 00:16:33.462 "serial_number": "SPDK2", 00:16:33.462 "model_number": "SPDK bdev Controller", 00:16:33.462 "max_namespaces": 32, 00:16:33.462 "min_cntlid": 1, 00:16:33.462 "max_cntlid": 65519, 00:16:33.462 "namespaces": [ 00:16:33.462 { 00:16:33.462 "nsid": 1, 00:16:33.462 "bdev_name": "Malloc2", 00:16:33.462 "name": "Malloc2", 00:16:33.462 "nguid": "AEADAF84A8DE44E78FB8ACDA6FB7BEB2", 00:16:33.462 "uuid": "aeadaf84-a8de-44e7-8fb8-acda6fb7beb2" 00:16:33.462 } 00:16:33.462 ] 00:16:33.462 } 00:16:33.462 ] 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2008696 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:33.462 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:33.722 [2024-11-20 10:34:05.848520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.722 Malloc3 00:16:33.722 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:33.722 [2024-11-20 10:34:06.042900] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.722 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.722 Asynchronous Event Request test 00:16:33.722 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.722 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:33.722 Registering asynchronous event callbacks... 
00:16:33.722 Starting namespace attribute notice tests for all controllers... 00:16:33.722 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:33.722 aer_cb - Changed Namespace 00:16:33.722 Cleaning up... 00:16:33.981 [ 00:16:33.981 { 00:16:33.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:33.981 "subtype": "Discovery", 00:16:33.981 "listen_addresses": [], 00:16:33.981 "allow_any_host": true, 00:16:33.981 "hosts": [] 00:16:33.981 }, 00:16:33.981 { 00:16:33.981 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:33.981 "subtype": "NVMe", 00:16:33.981 "listen_addresses": [ 00:16:33.981 { 00:16:33.981 "trtype": "VFIOUSER", 00:16:33.981 "adrfam": "IPv4", 00:16:33.981 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:33.981 "trsvcid": "0" 00:16:33.981 } 00:16:33.981 ], 00:16:33.981 "allow_any_host": true, 00:16:33.981 "hosts": [], 00:16:33.981 "serial_number": "SPDK1", 00:16:33.981 "model_number": "SPDK bdev Controller", 00:16:33.981 "max_namespaces": 32, 00:16:33.981 "min_cntlid": 1, 00:16:33.981 "max_cntlid": 65519, 00:16:33.981 "namespaces": [ 00:16:33.981 { 00:16:33.981 "nsid": 1, 00:16:33.981 "bdev_name": "Malloc1", 00:16:33.981 "name": "Malloc1", 00:16:33.981 "nguid": "2D84C8CFDF4549F49AA7BA0C593764FB", 00:16:33.981 "uuid": "2d84c8cf-df45-49f4-9aa7-ba0c593764fb" 00:16:33.981 }, 00:16:33.981 { 00:16:33.981 "nsid": 2, 00:16:33.981 "bdev_name": "Malloc3", 00:16:33.981 "name": "Malloc3", 00:16:33.981 "nguid": "2B69EFEADAA548F8AB772E7DC2B29496", 00:16:33.981 "uuid": "2b69efea-daa5-48f8-ab77-2e7dc2b29496" 00:16:33.981 } 00:16:33.981 ] 00:16:33.981 }, 00:16:33.981 { 00:16:33.981 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:33.981 "subtype": "NVMe", 00:16:33.981 "listen_addresses": [ 00:16:33.981 { 00:16:33.981 "trtype": "VFIOUSER", 00:16:33.981 "adrfam": "IPv4", 00:16:33.981 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:33.981 "trsvcid": "0" 00:16:33.981 } 00:16:33.981 ], 00:16:33.981 "allow_any_host": true, 00:16:33.981 "hosts": [], 00:16:33.981 "serial_number": "SPDK2", 00:16:33.981 "model_number": "SPDK bdev Controller", 00:16:33.982 "max_namespaces": 32, 00:16:33.982 "min_cntlid": 1, 00:16:33.982 "max_cntlid": 65519, 00:16:33.982 "namespaces": [ 00:16:33.982 { 00:16:33.982 "nsid": 1, 00:16:33.982 "bdev_name": "Malloc2", 00:16:33.982 "name": "Malloc2", 00:16:33.982 "nguid": "AEADAF84A8DE44E78FB8ACDA6FB7BEB2", 00:16:33.982 "uuid": "aeadaf84-a8de-44e7-8fb8-acda6fb7beb2" 00:16:33.982 } 00:16:33.982 ] 00:16:33.982 } 00:16:33.982 ] 00:16:33.982 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2008696 00:16:33.982 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:33.982 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:33.982 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:33.982 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:33.982 [2024-11-20 10:34:06.281339] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:16:33.982 [2024-11-20 10:34:06.281404] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008852 ] 00:16:33.982 [2024-11-20 10:34:06.326481] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:33.982 [2024-11-20 10:34:06.333333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:33.982 [2024-11-20 10:34:06.333353] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1d5069f000 00:16:33.982 [2024-11-20 10:34:06.334339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.335345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.336351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.337359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.338368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.339373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.340382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.341384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.982 [2024-11-20 10:34:06.342395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:33.982 [2024-11-20 10:34:06.342406] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1d50694000 00:16:33.982 [2024-11-20 10:34:06.343318] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:34.243 [2024-11-20 10:34:06.355433] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:34.243 [2024-11-20 10:34:06.355452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:34.243 [2024-11-20 10:34:06.360526] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:34.243 [2024-11-20 10:34:06.360559] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:34.243 [2024-11-20 10:34:06.360618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:34.243 
[2024-11-20 10:34:06.360628] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:34.243 [2024-11-20 10:34:06.360631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:34.243 [2024-11-20 10:34:06.361528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:34.243 [2024-11-20 10:34:06.361536] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:34.243 [2024-11-20 10:34:06.361541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:34.243 [2024-11-20 10:34:06.362535] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:34.243 [2024-11-20 10:34:06.362541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:34.243 [2024-11-20 10:34:06.362547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:34.243 [2024-11-20 10:34:06.363541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:34.243 [2024-11-20 10:34:06.363547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:34.244 [2024-11-20 10:34:06.364546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:34.244 [2024-11-20 10:34:06.364553] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:34.244 [2024-11-20 10:34:06.364557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:34.244 [2024-11-20 10:34:06.364561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:34.244 [2024-11-20 10:34:06.364668] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:34.244 [2024-11-20 10:34:06.364671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:34.244 [2024-11-20 10:34:06.364675] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:34.244 [2024-11-20 10:34:06.365552] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:34.244 [2024-11-20 10:34:06.366557] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:34.244 [2024-11-20 10:34:06.367559] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:34.244 [2024-11-20 10:34:06.368561] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:34.244 [2024-11-20 10:34:06.368591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:34.244 [2024-11-20 10:34:06.369572] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:34.244 [2024-11-20 10:34:06.369578] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:34.244 [2024-11-20 10:34:06.369582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.369597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:34.244 [2024-11-20 10:34:06.369606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.369615] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:34.244 [2024-11-20 10:34:06.369619] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:34.244 [2024-11-20 10:34:06.369622] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.244 [2024-11-20 10:34:06.369631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.376164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.376173] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:34.244 [2024-11-20 10:34:06.376176] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:34.244 [2024-11-20 10:34:06.376179] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:34.244 [2024-11-20 10:34:06.376183] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:34.244 [2024-11-20 10:34:06.376188] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:34.244 [2024-11-20 10:34:06.376191] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:34.244 [2024-11-20 10:34:06.376195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.376202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:34.244 [2024-11-20 
10:34:06.376210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.384162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.384171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.244 [2024-11-20 10:34:06.384179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.244 [2024-11-20 10:34:06.384185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.244 [2024-11-20 10:34:06.384191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.244 [2024-11-20 10:34:06.384195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.384200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.384206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.392170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.392178] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:34.244 [2024-11-20 10:34:06.392182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.392187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.392191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.392198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.400162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.400208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.400214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.400219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:34.244 [2024-11-20 10:34:06.400223] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:16:34.244 [2024-11-20 10:34:06.400225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.244 [2024-11-20 10:34:06.400230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.408162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.408170] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:34.244 [2024-11-20 10:34:06.408178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.408183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.408189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:34.244 [2024-11-20 10:34:06.408192] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:34.244 [2024-11-20 10:34:06.408194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.244 [2024-11-20 10:34:06.408201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.416163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.416173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.416179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.416184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:34.244 [2024-11-20 10:34:06.416187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:34.244 [2024-11-20 10:34:06.416189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.244 [2024-11-20 10:34:06.416194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:34.244 [2024-11-20 10:34:06.424161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:34.244 [2024-11-20 10:34:06.424168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.424173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.424179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.424183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.424187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.424191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:34.244 [2024-11-20 10:34:06.424195] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:34.244 [2024-11-20 10:34:06.424198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:34.245 [2024-11-20 10:34:06.424201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:34.245 [2024-11-20 10:34:06.424214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.432163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.432173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.440163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.440172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.448162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.448171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.456162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.456173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:34.245 [2024-11-20 10:34:06.456177] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:34.245 [2024-11-20 10:34:06.456179] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:34.245 [2024-11-20 10:34:06.456182] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:34.245 [2024-11-20 10:34:06.456184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:34.245 [2024-11-20 10:34:06.456189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:34.245 [2024-11-20 10:34:06.456195] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:34.245 
[2024-11-20 10:34:06.456198] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:34.245 [2024-11-20 10:34:06.456200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.245 [2024-11-20 10:34:06.456205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.456210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:34.245 [2024-11-20 10:34:06.456213] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:34.245 [2024-11-20 10:34:06.456215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.245 [2024-11-20 10:34:06.456220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.456226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:34.245 [2024-11-20 10:34:06.456229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:34.245 [2024-11-20 10:34:06.456231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:34.245 [2024-11-20 10:34:06.456235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:34.245 [2024-11-20 10:34:06.464163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.464174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.464181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:34.245 [2024-11-20 10:34:06.464186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:34.245 ===================================================== 00:16:34.245 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:34.245 ===================================================== 00:16:34.245 Controller Capabilities/Features 00:16:34.245 ================================ 00:16:34.245 Vendor ID: 4e58 00:16:34.245 Subsystem Vendor ID: 4e58 00:16:34.245 Serial Number: SPDK2 00:16:34.245 Model Number: SPDK bdev Controller 00:16:34.245 Firmware Version: 25.01 00:16:34.245 Recommended Arb Burst: 6 00:16:34.245 IEEE OUI Identifier: 8d 6b 50 00:16:34.245 Multi-path I/O 00:16:34.245 May have multiple subsystem ports: Yes 00:16:34.245 May have multiple controllers: Yes 00:16:34.245 Associated with SR-IOV VF: No 00:16:34.245 Max Data Transfer Size: 131072 00:16:34.245 Max Number of Namespaces: 32 00:16:34.245 Max Number of I/O Queues: 127 00:16:34.245 NVMe Specification Version (VS): 1.3 00:16:34.245 NVMe Specification Version (Identify): 1.3 00:16:34.245 Maximum Queue Entries: 256 00:16:34.245 Contiguous Queues Required: Yes 00:16:34.245 Arbitration Mechanisms Supported 00:16:34.245 Weighted Round Robin: Not Supported 00:16:34.245 Vendor Specific: Not 
Supported 00:16:34.245 Reset Timeout: 15000 ms 00:16:34.245 Doorbell Stride: 4 bytes 00:16:34.245 NVM Subsystem Reset: Not Supported 00:16:34.245 Command Sets Supported 00:16:34.245 NVM Command Set: Supported 00:16:34.245 Boot Partition: Not Supported 00:16:34.245 Memory Page Size Minimum: 4096 bytes 00:16:34.245 Memory Page Size Maximum: 4096 bytes 00:16:34.245 Persistent Memory Region: Not Supported 00:16:34.245 Optional Asynchronous Events Supported 00:16:34.245 Namespace Attribute Notices: Supported 00:16:34.245 Firmware Activation Notices: Not Supported 00:16:34.245 ANA Change Notices: Not Supported 00:16:34.245 PLE Aggregate Log Change Notices: Not Supported 00:16:34.245 LBA Status Info Alert Notices: Not Supported 00:16:34.245 EGE Aggregate Log Change Notices: Not Supported 00:16:34.245 Normal NVM Subsystem Shutdown event: Not Supported 00:16:34.245 Zone Descriptor Change Notices: Not Supported 00:16:34.245 Discovery Log Change Notices: Not Supported 00:16:34.245 Controller Attributes 00:16:34.245 128-bit Host Identifier: Supported 00:16:34.245 Non-Operational Permissive Mode: Not Supported 00:16:34.245 NVM Sets: Not Supported 00:16:34.245 Read Recovery Levels: Not Supported 00:16:34.245 Endurance Groups: Not Supported 00:16:34.245 Predictable Latency Mode: Not Supported 00:16:34.245 Traffic Based Keep ALive: Not Supported 00:16:34.245 Namespace Granularity: Not Supported 00:16:34.245 SQ Associations: Not Supported 00:16:34.245 UUID List: Not Supported 00:16:34.245 Multi-Domain Subsystem: Not Supported 00:16:34.245 Fixed Capacity Management: Not Supported 00:16:34.245 Variable Capacity Management: Not Supported 00:16:34.245 Delete Endurance Group: Not Supported 00:16:34.245 Delete NVM Set: Not Supported 00:16:34.245 Extended LBA Formats Supported: Not Supported 00:16:34.245 Flexible Data Placement Supported: Not Supported 00:16:34.245 00:16:34.245 Controller Memory Buffer Support 00:16:34.245 ================================ 00:16:34.245 Supported: No 00:16:34.245 00:16:34.245 Persistent Memory Region Support 00:16:34.245 ================================ 00:16:34.245 Supported: No 00:16:34.245 00:16:34.245 Admin Command Set Attributes 00:16:34.245 ============================ 00:16:34.245 Security Send/Receive: Not Supported 00:16:34.245 Format NVM: Not Supported 00:16:34.245 Firmware Activate/Download: Not Supported 00:16:34.245 Namespace Management: Not Supported 00:16:34.245 Device Self-Test: Not Supported 00:16:34.245 Directives: Not Supported 00:16:34.245 NVMe-MI: Not Supported 00:16:34.245 Virtualization Management: Not Supported 00:16:34.245 Doorbell Buffer Config: Not Supported 00:16:34.245 Get LBA Status Capability: Not Supported 00:16:34.245 Command & Feature Lockdown Capability: Not Supported 00:16:34.245 Abort Command Limit: 4 00:16:34.245 Async Event Request Limit: 4 00:16:34.245 Number of Firmware Slots: N/A 00:16:34.245 Firmware Slot 1 Read-Only: N/A 00:16:34.245 Firmware Activation Without Reset: N/A 00:16:34.245 Multiple Update Detection Support: N/A 00:16:34.245 Firmware Update Granularity: No Information Provided 00:16:34.245 Per-Namespace SMART Log: No 00:16:34.245 Asymmetric Namespace Access Log Page: Not Supported 00:16:34.245 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:34.245 Command Effects Log Page: Supported 00:16:34.245 Get Log Page Extended Data: Supported 00:16:34.245 Telemetry Log Pages: Not Supported 00:16:34.245 Persistent Event Log Pages: Not Supported 00:16:34.245 Supported Log Pages Log Page: May Support 00:16:34.245 Commands Supported & 
Effects Log Page: Not Supported 00:16:34.245 Feature Identifiers & Effects Log Page:May Support 00:16:34.245 NVMe-MI Commands & Effects Log Page: May Support 00:16:34.245 Data Area 4 for Telemetry Log: Not Supported 00:16:34.245 Error Log Page Entries Supported: 128 00:16:34.245 Keep Alive: Supported 00:16:34.245 Keep Alive Granularity: 10000 ms 00:16:34.245 00:16:34.245 NVM Command Set Attributes 00:16:34.245 ========================== 00:16:34.245 Submission Queue Entry Size 00:16:34.245 Max: 64 00:16:34.245 Min: 64 00:16:34.245 Completion Queue Entry Size 00:16:34.245 Max: 16 00:16:34.245 Min: 16 00:16:34.245 Number of Namespaces: 32 00:16:34.245 Compare Command: Supported 00:16:34.245 Write Uncorrectable Command: Not Supported 00:16:34.245 Dataset Management Command: Supported 00:16:34.246 Write Zeroes Command: Supported 00:16:34.246 Set Features Save Field: Not Supported 00:16:34.246 Reservations: Not Supported 00:16:34.246 Timestamp: Not Supported 00:16:34.246 Copy: Supported 00:16:34.246 Volatile Write Cache: Present 00:16:34.246 Atomic Write Unit (Normal): 1 00:16:34.246 Atomic Write Unit (PFail): 1 00:16:34.246 Atomic Compare & Write Unit: 1 00:16:34.246 Fused Compare & Write: Supported 00:16:34.246 Scatter-Gather List 00:16:34.246 SGL Command Set: Supported (Dword aligned) 00:16:34.246 SGL Keyed: Not Supported 00:16:34.246 SGL Bit Bucket Descriptor: Not Supported 00:16:34.246 SGL Metadata Pointer: Not Supported 00:16:34.246 Oversized SGL: Not Supported 00:16:34.246 SGL Metadata Address: Not Supported 00:16:34.246 SGL Offset: Not Supported 00:16:34.246 Transport SGL Data Block: Not Supported 00:16:34.246 Replay Protected Memory Block: Not Supported 00:16:34.246 00:16:34.246 Firmware Slot Information 00:16:34.246 ========================= 00:16:34.246 Active slot: 1 00:16:34.246 Slot 1 Firmware Revision: 25.01 00:16:34.246 00:16:34.246 00:16:34.246 Commands Supported and Effects 00:16:34.246 ============================== 00:16:34.246 Admin Commands 00:16:34.246 -------------- 00:16:34.246 Get Log Page (02h): Supported 00:16:34.246 Identify (06h): Supported 00:16:34.246 Abort (08h): Supported 00:16:34.246 Set Features (09h): Supported 00:16:34.246 Get Features (0Ah): Supported 00:16:34.246 Asynchronous Event Request (0Ch): Supported 00:16:34.246 Keep Alive (18h): Supported 00:16:34.246 I/O Commands 00:16:34.246 ------------ 00:16:34.246 Flush (00h): Supported LBA-Change 00:16:34.246 Write (01h): Supported LBA-Change 00:16:34.246 Read (02h): Supported 00:16:34.246 Compare (05h): Supported 00:16:34.246 Write Zeroes (08h): Supported LBA-Change 00:16:34.246 Dataset Management (09h): Supported LBA-Change 00:16:34.246 Copy (19h): Supported LBA-Change 00:16:34.246 00:16:34.246 Error Log 00:16:34.246 ========= 00:16:34.246 00:16:34.246 Arbitration 00:16:34.246 =========== 00:16:34.246 Arbitration Burst: 1 00:16:34.246 00:16:34.246 Power Management 00:16:34.246 ================ 00:16:34.246 Number of Power States: 1 00:16:34.246 Current Power State: Power State #0 00:16:34.246 Power State #0: 00:16:34.246 Max Power: 0.00 W 00:16:34.246 Non-Operational State: Operational 00:16:34.246 Entry Latency: Not Reported 00:16:34.246 Exit Latency: Not Reported 00:16:34.246 Relative Read Throughput: 0 00:16:34.246 Relative Read Latency: 0 00:16:34.246 Relative Write Throughput: 0 00:16:34.246 Relative Write Latency: 0 00:16:34.246 Idle Power: Not Reported 00:16:34.246 Active Power: Not Reported 00:16:34.246 Non-Operational Permissive Mode: Not Supported 00:16:34.246 00:16:34.246 Health Information 
00:16:34.246 ================== 00:16:34.246 Critical Warnings: 00:16:34.246 Available Spare Space: OK 00:16:34.246 Temperature: OK 00:16:34.246 Device Reliability: OK 00:16:34.246 Read Only: No 00:16:34.246 Volatile Memory Backup: OK 00:16:34.246 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:34.246 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:34.246 Available Spare: 0% 00:16:34.246 [2024-11-20 10:34:06.464261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:34.246 [2024-11-20 10:34:06.472163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:34.246 [2024-11-20 10:34:06.472187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:34.246 [2024-11-20 10:34:06.472194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.246 [2024-11-20 10:34:06.472198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.246 [2024-11-20 10:34:06.472204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.246 [2024-11-20 10:34:06.472209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.246 [2024-11-20 10:34:06.472237] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:34.246 [2024-11-20 10:34:06.472244] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:34.246 [2024-11-20 10:34:06.473248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:34.246 [2024-11-20 10:34:06.473284] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:34.246 [2024-11-20 10:34:06.473289] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:34.246 [2024-11-20 10:34:06.474249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:34.246 [2024-11-20 10:34:06.474258] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:34.246 [2024-11-20 10:34:06.474299] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:34.246 [2024-11-20 10:34:06.475419] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:34.246 Available Spare Threshold: 0% 00:16:34.246 Life Percentage Used: 0% 00:16:34.246 Data Units Read: 0 00:16:34.246 Data Units Written: 0 00:16:34.246 Host Read Commands: 0 00:16:34.246 Host Write Commands: 0 00:16:34.246 Controller Busy Time: 0 minutes 00:16:34.246 Power Cycles: 0 00:16:34.246 Power On Hours: 0 hours 00:16:34.246 Unsafe Shutdowns: 0 00:16:34.246 Unrecoverable Media Errors: 0 00:16:34.246 Lifetime Error Log Entries: 0 00:16:34.246 Warning Temperature
Time: 0 minutes 00:16:34.246 Critical Temperature Time: 0 minutes 00:16:34.246 00:16:34.246 Number of Queues 00:16:34.246 ================ 00:16:34.246 Number of I/O Submission Queues: 127 00:16:34.246 Number of I/O Completion Queues: 127 00:16:34.246 00:16:34.246 Active Namespaces 00:16:34.246 ================= 00:16:34.246 Namespace ID:1 00:16:34.246 Error Recovery Timeout: Unlimited 00:16:34.246 Command Set Identifier: NVM (00h) 00:16:34.246 Deallocate: Supported 00:16:34.246 Deallocated/Unwritten Error: Not Supported 00:16:34.246 Deallocated Read Value: Unknown 00:16:34.246 Deallocate in Write Zeroes: Not Supported 00:16:34.246 Deallocated Guard Field: 0xFFFF 00:16:34.246 Flush: Supported 00:16:34.246 Reservation: Supported 00:16:34.246 Namespace Sharing Capabilities: Multiple Controllers 00:16:34.246 Size (in LBAs): 131072 (0GiB) 00:16:34.246 Capacity (in LBAs): 131072 (0GiB) 00:16:34.246 Utilization (in LBAs): 131072 (0GiB) 00:16:34.246 NGUID: AEADAF84A8DE44E78FB8ACDA6FB7BEB2 00:16:34.246 UUID: aeadaf84-a8de-44e7-8fb8-acda6fb7beb2 00:16:34.246 Thin Provisioning: Not Supported 00:16:34.246 Per-NS Atomic Units: Yes 00:16:34.246 Atomic Boundary Size (Normal): 0 00:16:34.246 Atomic Boundary Size (PFail): 0 00:16:34.246 Atomic Boundary Offset: 0 00:16:34.246 Maximum Single Source Range Length: 65535 00:16:34.246 Maximum Copy Length: 65535 00:16:34.246 Maximum Source Range Count: 1 00:16:34.246 NGUID/EUI64 Never Reused: No 00:16:34.246 Namespace Write Protected: No 00:16:34.246 Number of LBA Formats: 1 00:16:34.246 Current LBA Format: LBA Format #00 00:16:34.246 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:34.246 00:16:34.246 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:34.506 [2024-11-20 10:34:06.661224] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:39.791 Initializing NVMe Controllers 00:16:39.791 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:39.791 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:39.791 Initialization complete. Launching workers. 
00:16:39.791 ======================================================== 00:16:39.791 Latency(us) 00:16:39.791 Device Information : IOPS MiB/s Average min max 00:16:39.791 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40030.17 156.37 3197.46 844.61 7789.48 00:16:39.791 ======================================================== 00:16:39.791 Total : 40030.17 156.37 3197.46 844.61 7789.48 00:16:39.791 00:16:39.791 [2024-11-20 10:34:11.764368] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:39.791 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:39.791 [2024-11-20 10:34:11.964983] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:45.068 Initializing NVMe Controllers 00:16:45.068 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:45.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:45.068 Initialization complete. Launching workers. 00:16:45.068 ======================================================== 00:16:45.068 Latency(us) 00:16:45.068 Device Information : IOPS MiB/s Average min max 00:16:45.068 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40054.80 156.46 3198.34 847.43 6933.00 00:16:45.068 ======================================================== 00:16:45.068 Total : 40054.80 156.46 3198.34 847.43 6933.00 00:16:45.068 00:16:45.068 [2024-11-20 10:34:16.985634] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:45.068 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:45.068 [2024-11-20 10:34:17.186806] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:50.354 [2024-11-20 10:34:22.337237] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:50.354 Initializing NVMe Controllers 00:16:50.354 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:50.354 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:50.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:50.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:50.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:50.354 Initialization complete. Launching workers. 
00:16:50.354 Starting thread on core 2 00:16:50.354 Starting thread on core 3 00:16:50.354 Starting thread on core 1 00:16:50.354 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:50.354 [2024-11-20 10:34:22.584472] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:53.650 [2024-11-20 10:34:25.648354] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:53.650 Initializing NVMe Controllers 00:16:53.650 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.650 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.650 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:53.650 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:53.650 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:53.650 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:53.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:53.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:53.650 Initialization complete. Launching workers. 00:16:53.650 Starting thread on core 1 with urgent priority queue 00:16:53.650 Starting thread on core 2 with urgent priority queue 00:16:53.650 Starting thread on core 3 with urgent priority queue 00:16:53.650 Starting thread on core 0 with urgent priority queue 00:16:53.650 SPDK bdev Controller (SPDK2 ) core 0: 16130.00 IO/s 6.20 secs/100000 ios 00:16:53.650 SPDK bdev Controller (SPDK2 ) core 1: 8224.33 IO/s 12.16 secs/100000 ios 00:16:53.650 SPDK bdev Controller (SPDK2 ) core 2: 12449.67 IO/s 8.03 secs/100000 ios 00:16:53.650 SPDK bdev Controller (SPDK2 ) core 3: 13248.00 IO/s 7.55 secs/100000 ios 00:16:53.650 ======================================================== 00:16:53.650 00:16:53.650 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:53.650 [2024-11-20 10:34:25.882990] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:53.650 Initializing NVMe Controllers 00:16:53.650 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.650 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.650 Namespace ID: 1 size: 0GB 00:16:53.650 Initialization complete. 00:16:53.650 INFO: using host memory buffer for IO 00:16:53.650 Hello world! 
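Note: every SPDK example binary in this stage reaches the controller through the same vfio-user transport-ID string passed with -r. A minimal manual replay of the hello_world and perf runs above (a sketch only, assuming the workspace and socket paths used throughout this log) would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # hello_world: attach to the controller, write "Hello world!" to NSID 1 and read it back (flags as invoked above)
  "$SPDK"/build/examples/hello_world -d 256 -g -r "$TRID"
  # spdk_nvme_perf: the 5-second, 4 KiB, queue-depth-128 read run traced earlier in this stage
  "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The reconnect, arbitration and overhead tools traced in this stage take the same -r string; only the workload flags differ.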
00:16:53.650 [2024-11-20 10:34:25.893060] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:53.650 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:53.912 [2024-11-20 10:34:26.134305] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.851 Initializing NVMe Controllers 00:16:54.851 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.851 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.851 Initialization complete. Launching workers. 00:16:54.851 submit (in ns) avg, min, max = 5329.1, 2821.7, 3998825.8 00:16:54.851 complete (in ns) avg, min, max = 17889.8, 1642.5, 3997983.3 00:16:54.851 00:16:54.851 Submit histogram 00:16:54.851 ================ 00:16:54.851 Range in us Cumulative Count 00:16:54.852 2.813 - 2.827: 0.1474% ( 30) 00:16:54.852 2.827 - 2.840: 1.1647% ( 207) 00:16:54.852 2.840 - 2.853: 3.3269% ( 440) 00:16:54.852 2.853 - 2.867: 6.9438% ( 736) 00:16:54.852 2.867 - 2.880: 11.3519% ( 897) 00:16:54.852 2.880 - 2.893: 17.0082% ( 1151) 00:16:54.852 2.893 - 2.907: 22.3598% ( 1089) 00:16:54.852 2.907 - 2.920: 29.1464% ( 1381) 00:16:54.852 2.920 - 2.933: 35.0238% ( 1196) 00:16:54.852 2.933 - 2.947: 39.6531% ( 942) 00:16:54.852 2.947 - 2.960: 44.4002% ( 966) 00:16:54.852 2.960 - 2.973: 50.1204% ( 1164) 00:16:54.852 2.973 - 2.987: 55.7767% ( 1151) 00:16:54.852 2.987 - 3.000: 64.3177% ( 1738) 00:16:54.852 3.000 - 3.013: 72.6473% ( 1695) 00:16:54.852 3.013 - 3.027: 80.4757% ( 1593) 00:16:54.852 3.027 - 3.040: 87.4588% ( 1421) 00:16:54.852 3.040 - 3.053: 92.7613% ( 1079) 00:16:54.852 3.053 - 3.067: 96.1669% ( 693) 00:16:54.852 3.067 - 3.080: 98.0736% ( 388) 00:16:54.852 3.080 - 3.093: 98.9926% ( 187) 00:16:54.852 3.093 - 3.107: 99.3464% ( 72) 00:16:54.852 3.107 - 3.120: 99.4840% ( 28) 00:16:54.852 3.120 - 3.133: 99.5528% ( 14) 00:16:54.852 3.133 - 3.147: 99.5823% ( 6) 00:16:54.852 3.147 - 3.160: 99.5921% ( 2) 00:16:54.852 3.187 - 3.200: 99.5970% ( 1) 00:16:54.852 3.200 - 3.213: 99.6019% ( 1) 00:16:54.852 3.280 - 3.293: 99.6069% ( 1) 00:16:54.852 3.360 - 3.373: 99.6118% ( 1) 00:16:54.852 3.387 - 3.400: 99.6167% ( 1) 00:16:54.852 3.413 - 3.440: 99.6216% ( 1) 00:16:54.852 3.547 - 3.573: 99.6265% ( 1) 00:16:54.852 3.653 - 3.680: 99.6314% ( 1) 00:16:54.852 3.707 - 3.733: 99.6363% ( 1) 00:16:54.852 3.787 - 3.813: 99.6413% ( 1) 00:16:54.852 3.840 - 3.867: 99.6462% ( 1) 00:16:54.852 3.867 - 3.893: 99.6511% ( 1) 00:16:54.852 4.107 - 4.133: 99.6560% ( 1) 00:16:54.852 4.480 - 4.507: 99.6609% ( 1) 00:16:54.852 4.560 - 4.587: 99.6658% ( 1) 00:16:54.852 4.693 - 4.720: 99.6707% ( 1) 00:16:54.852 4.720 - 4.747: 99.6757% ( 1) 00:16:54.852 4.800 - 4.827: 99.6806% ( 1) 00:16:54.852 4.880 - 4.907: 99.6855% ( 1) 00:16:54.852 4.933 - 4.960: 99.6904% ( 1) 00:16:54.852 4.960 - 4.987: 99.6953% ( 1) 00:16:54.852 4.987 - 5.013: 99.7051% ( 2) 00:16:54.852 5.013 - 5.040: 99.7101% ( 1) 00:16:54.852 5.067 - 5.093: 99.7150% ( 1) 00:16:54.852 5.093 - 5.120: 99.7199% ( 1) 00:16:54.852 5.147 - 5.173: 99.7297% ( 2) 00:16:54.852 5.200 - 5.227: 99.7395% ( 2) 00:16:54.852 5.227 - 5.253: 99.7445% ( 1) 00:16:54.852 5.360 - 5.387: 99.7494% ( 1) 00:16:54.852 5.387 - 5.413: 99.7543% ( 1) 00:16:54.852 5.413 - 5.440: 99.7592% ( 1) 00:16:54.852 5.440 - 5.467: 
99.7641% ( 1) 00:16:54.852 5.493 - 5.520: 99.7690% ( 1) 00:16:54.852 5.547 - 5.573: 99.7838% ( 3) 00:16:54.852 5.600 - 5.627: 99.7936% ( 2) 00:16:54.852 5.653 - 5.680: 99.7985% ( 1) 00:16:54.852 5.813 - 5.840: 99.8034% ( 1) 00:16:54.852 5.840 - 5.867: 99.8083% ( 1) 00:16:54.852 5.947 - 5.973: 99.8182% ( 2) 00:16:54.852 5.973 - 6.000: 99.8231% ( 1) 00:16:54.852 6.053 - 6.080: 99.8329% ( 2) 00:16:54.852 6.080 - 6.107: 99.8378% ( 1) 00:16:54.852 6.107 - 6.133: 99.8427% ( 1) 00:16:54.852 6.133 - 6.160: 99.8526% ( 2) 00:16:54.852 6.160 - 6.187: 99.8624% ( 2) 00:16:54.852 6.187 - 6.213: 99.8673% ( 1) 00:16:54.852 6.240 - 6.267: 99.8722% ( 1) 00:16:54.852 6.267 - 6.293: 99.8771% ( 1) 00:16:54.852 6.320 - 6.347: 99.8821% ( 1) 00:16:54.852 6.400 - 6.427: 99.8870% ( 1) 00:16:54.852 6.427 - 6.453: 99.8919% ( 1) 00:16:54.852 6.453 - 6.480: 99.8968% ( 1) 00:16:54.852 6.480 - 6.507: 99.9017% ( 1) 00:16:55.132 [2024-11-20 10:34:27.226679] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.132 6.747 - 6.773: 99.9115% ( 2) 00:16:55.132 6.773 - 6.800: 99.9214% ( 2) 00:16:55.132 7.147 - 7.200: 99.9263% ( 1) 00:16:55.132 9.547 - 9.600: 99.9312% ( 1) 00:16:55.132 10.667 - 10.720: 99.9361% ( 1) 00:16:55.132 11.627 - 11.680: 99.9410% ( 1) 00:16:55.132 3986.773 - 4014.080: 100.0000% ( 12) 00:16:55.132 00:16:55.132 Complete histogram 00:16:55.132 ================== 00:16:55.132 Range in us Cumulative Count 00:16:55.132 1.640 - 1.647: 0.3096% ( 63) 00:16:55.132 1.647 - 1.653: 0.6733% ( 74) 00:16:55.132 1.653 - 1.660: 0.6880% ( 3) 00:16:55.132 1.660 - 1.667: 0.7371% ( 10) 00:16:55.132 1.667 - 1.673: 0.8354% ( 20) 00:16:55.132 1.673 - 1.680: 0.8600% ( 5) 00:16:55.132 1.680 - 1.687: 0.9042% ( 9) 00:16:55.132 1.693 - 1.700: 1.2089% ( 62) 00:16:55.132 1.700 - 1.707: 35.3039% ( 6938) 00:16:55.132 1.707 - 1.720: 56.8775% ( 4390) 00:16:55.132 1.720 - 1.733: 74.8194% ( 3651) 00:16:55.132 1.733 - 1.747: 81.8124% ( 1423) 00:16:55.132 1.747 - 1.760: 83.8026% ( 405) 00:16:55.132 1.760 - 1.773: 87.7046% ( 794) 00:16:55.132 1.773 - 1.787: 93.1348% ( 1105) 00:16:55.132 1.787 - 1.800: 96.7861% ( 743) 00:16:55.132 1.800 - 1.813: 98.6732% ( 384) 00:16:55.132 1.813 - 1.827: 99.2825% ( 124) 00:16:55.132 1.827 - 1.840: 99.4250% ( 29) 00:16:55.132 1.840 - 1.853: 99.4643% ( 8) 00:16:55.132 1.853 - 1.867: 99.4693% ( 1) 00:16:55.132 3.653 - 3.680: 99.4742% ( 1) 00:16:55.132 3.840 - 3.867: 99.4791% ( 1) 00:16:55.132 3.947 - 3.973: 99.4840% ( 1) 00:16:55.132 3.973 - 4.000: 99.4889% ( 1) 00:16:55.132 4.027 - 4.053: 99.4987% ( 2) 00:16:55.132 4.053 - 4.080: 99.5086% ( 2) 00:16:55.132 4.133 - 4.160: 99.5135% ( 1) 00:16:55.132 4.293 - 4.320: 99.5233% ( 2) 00:16:55.132 4.373 - 4.400: 99.5282% ( 1) 00:16:55.132 4.400 - 4.427: 99.5331% ( 1) 00:16:55.132 4.667 - 4.693: 99.5381% ( 1) 00:16:55.132 4.693 - 4.720: 99.5430% ( 1) 00:16:55.132 4.800 - 4.827: 99.5479% ( 1) 00:16:55.132 4.853 - 4.880: 99.5528% ( 1) 00:16:55.132 4.933 - 4.960: 99.5626% ( 2) 00:16:55.132 5.013 - 5.040: 99.5675% ( 1) 00:16:55.132 5.067 - 5.093: 99.5725% ( 1) 00:16:55.132 5.227 - 5.253: 99.5774% ( 1) 00:16:55.132 5.387 - 5.413: 99.5872% ( 2) 00:16:55.132 34.347 - 34.560: 99.5921% ( 1) 00:16:55.132 1297.067 - 1303.893: 99.5970% ( 1) 00:16:55.132 3986.773 - 4014.080: 100.0000% ( 82) 00:16:55.132 00:16:55.132 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:55.132 10:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:55.132 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:55.132 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:55.132 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:55.132 [ 00:16:55.132 { 00:16:55.132 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:55.132 "subtype": "Discovery", 00:16:55.132 "listen_addresses": [], 00:16:55.132 "allow_any_host": true, 00:16:55.132 "hosts": [] 00:16:55.132 }, 00:16:55.132 { 00:16:55.132 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:55.132 "subtype": "NVMe", 00:16:55.132 "listen_addresses": [ 00:16:55.132 { 00:16:55.132 "trtype": "VFIOUSER", 00:16:55.132 "adrfam": "IPv4", 00:16:55.132 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:55.132 "trsvcid": "0" 00:16:55.132 } 00:16:55.132 ], 00:16:55.132 "allow_any_host": true, 00:16:55.132 "hosts": [], 00:16:55.132 "serial_number": "SPDK1", 00:16:55.132 "model_number": "SPDK bdev Controller", 00:16:55.132 "max_namespaces": 32, 00:16:55.132 "min_cntlid": 1, 00:16:55.132 "max_cntlid": 65519, 00:16:55.132 "namespaces": [ 00:16:55.132 { 00:16:55.132 "nsid": 1, 00:16:55.132 "bdev_name": "Malloc1", 00:16:55.132 "name": "Malloc1", 00:16:55.132 "nguid": "2D84C8CFDF4549F49AA7BA0C593764FB", 00:16:55.132 "uuid": "2d84c8cf-df45-49f4-9aa7-ba0c593764fb" 00:16:55.132 }, 00:16:55.132 { 00:16:55.132 "nsid": 2, 00:16:55.132 "bdev_name": "Malloc3", 00:16:55.132 "name": "Malloc3", 00:16:55.132 "nguid": "2B69EFEADAA548F8AB772E7DC2B29496", 00:16:55.132 "uuid": "2b69efea-daa5-48f8-ab77-2e7dc2b29496" 00:16:55.132 } 00:16:55.132 ] 00:16:55.132 }, 00:16:55.132 { 00:16:55.132 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:55.132 "subtype": "NVMe", 00:16:55.132 "listen_addresses": [ 00:16:55.132 { 00:16:55.132 "trtype": "VFIOUSER", 00:16:55.132 "adrfam": "IPv4", 00:16:55.132 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:55.132 "trsvcid": "0" 00:16:55.132 } 00:16:55.132 ], 00:16:55.132 "allow_any_host": true, 00:16:55.132 "hosts": [], 00:16:55.132 "serial_number": "SPDK2", 00:16:55.132 "model_number": "SPDK bdev Controller", 00:16:55.132 "max_namespaces": 32, 00:16:55.132 "min_cntlid": 1, 00:16:55.132 "max_cntlid": 65519, 00:16:55.132 "namespaces": [ 00:16:55.132 { 00:16:55.132 "nsid": 1, 00:16:55.132 "bdev_name": "Malloc2", 00:16:55.132 "name": "Malloc2", 00:16:55.132 "nguid": "AEADAF84A8DE44E78FB8ACDA6FB7BEB2", 00:16:55.132 "uuid": "aeadaf84-a8de-44e7-8fb8-acda6fb7beb2" 00:16:55.132 } 00:16:55.133 ] 00:16:55.133 } 00:16:55.133 ] 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2012888 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 
00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:55.133 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:55.392 [2024-11-20 10:34:27.596479] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.392 Malloc4 00:16:55.392 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:55.652 [2024-11-20 10:34:27.797749] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.652 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:55.652 Asynchronous Event Request test 00:16:55.652 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.652 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.652 Registering asynchronous event callbacks... 00:16:55.652 Starting namespace attribute notice tests for all controllers... 00:16:55.652 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:55.652 aer_cb - Changed Namespace 00:16:55.652 Cleaning up... 
00:16:55.652 [ 00:16:55.652 { 00:16:55.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:55.652 "subtype": "Discovery", 00:16:55.652 "listen_addresses": [], 00:16:55.652 "allow_any_host": true, 00:16:55.652 "hosts": [] 00:16:55.652 }, 00:16:55.652 { 00:16:55.652 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:55.652 "subtype": "NVMe", 00:16:55.652 "listen_addresses": [ 00:16:55.652 { 00:16:55.652 "trtype": "VFIOUSER", 00:16:55.652 "adrfam": "IPv4", 00:16:55.652 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:55.652 "trsvcid": "0" 00:16:55.652 } 00:16:55.652 ], 00:16:55.652 "allow_any_host": true, 00:16:55.652 "hosts": [], 00:16:55.652 "serial_number": "SPDK1", 00:16:55.652 "model_number": "SPDK bdev Controller", 00:16:55.652 "max_namespaces": 32, 00:16:55.652 "min_cntlid": 1, 00:16:55.652 "max_cntlid": 65519, 00:16:55.652 "namespaces": [ 00:16:55.652 { 00:16:55.652 "nsid": 1, 00:16:55.652 "bdev_name": "Malloc1", 00:16:55.652 "name": "Malloc1", 00:16:55.652 "nguid": "2D84C8CFDF4549F49AA7BA0C593764FB", 00:16:55.652 "uuid": "2d84c8cf-df45-49f4-9aa7-ba0c593764fb" 00:16:55.652 }, 00:16:55.652 { 00:16:55.652 "nsid": 2, 00:16:55.652 "bdev_name": "Malloc3", 00:16:55.652 "name": "Malloc3", 00:16:55.652 "nguid": "2B69EFEADAA548F8AB772E7DC2B29496", 00:16:55.652 "uuid": "2b69efea-daa5-48f8-ab77-2e7dc2b29496" 00:16:55.652 } 00:16:55.652 ] 00:16:55.652 }, 00:16:55.652 { 00:16:55.652 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:55.652 "subtype": "NVMe", 00:16:55.652 "listen_addresses": [ 00:16:55.652 { 00:16:55.652 "trtype": "VFIOUSER", 00:16:55.652 "adrfam": "IPv4", 00:16:55.652 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:55.652 "trsvcid": "0" 00:16:55.652 } 00:16:55.652 ], 00:16:55.652 "allow_any_host": true, 00:16:55.652 "hosts": [], 00:16:55.652 "serial_number": "SPDK2", 00:16:55.652 "model_number": "SPDK bdev Controller", 00:16:55.652 "max_namespaces": 32, 00:16:55.652 "min_cntlid": 1, 00:16:55.652 "max_cntlid": 65519, 00:16:55.652 "namespaces": [ 00:16:55.652 { 00:16:55.652 "nsid": 1, 00:16:55.652 "bdev_name": "Malloc2", 00:16:55.652 "name": "Malloc2", 00:16:55.652 "nguid": "AEADAF84A8DE44E78FB8ACDA6FB7BEB2", 00:16:55.652 "uuid": "aeadaf84-a8de-44e7-8fb8-acda6fb7beb2" 00:16:55.652 }, 00:16:55.652 { 00:16:55.652 "nsid": 2, 00:16:55.652 "bdev_name": "Malloc4", 00:16:55.652 "name": "Malloc4", 00:16:55.652 "nguid": "16DADBD0D0A04ABE95C8399DFDAF3B0C", 00:16:55.652 "uuid": "16dadbd0-d0a0-4abe-95c8-399dfdaf3b0c" 00:16:55.652 } 00:16:55.652 ] 00:16:55.652 } 00:16:55.652 ] 00:16:55.652 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2012888 00:16:55.652 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:55.652 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2003832 00:16:55.652 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2003832 ']' 00:16:55.652 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2003832 00:16:55.652 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:55.652 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.653 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2003832 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2003832' 00:16:55.917 killing process with pid 2003832 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2003832 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2003832 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2013084 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2013084' 00:16:55.917 Process pid: 2013084 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2013084 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2013084 ']' 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.917 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:55.917 [2024-11-20 10:34:28.271901] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:55.917 [2024-11-20 10:34:28.272843] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:16:55.917 [2024-11-20 10:34:28.272887] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.217 [2024-11-20 10:34:28.359000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.217 [2024-11-20 10:34:28.394095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.217 [2024-11-20 10:34:28.394130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.217 [2024-11-20 10:34:28.394140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.217 [2024-11-20 10:34:28.394144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.217 [2024-11-20 10:34:28.394148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.217 [2024-11-20 10:34:28.395409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.217 [2024-11-20 10:34:28.395441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.217 [2024-11-20 10:34:28.395564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.217 [2024-11-20 10:34:28.395567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.217 [2024-11-20 10:34:28.448506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:56.217 [2024-11-20 10:34:28.449343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:56.217 [2024-11-20 10:34:28.450535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:56.217 [2024-11-20 10:34:28.450635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:56.217 [2024-11-20 10:34:28.450688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
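The interrupt-mode pass that begins here differs from the earlier polled-mode stages in two places: the target itself is launched with --interrupt-mode, and the VFIOUSER transport is created with the extra -M -I flags this script passes as transport_args. Condensed into a sketch (same workspace paths assumed; the target must be up and listening on /var/tmp/spdk.sock before the RPC is sent):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # launch nvmf_tgt on cores 0-3 in interrupt mode, flags exactly as traced above
  "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # create the vfio-user transport with the interrupt-mode transport_args (-M -I)
  "$SPDK"/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I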
00:16:56.217 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.217 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:56.217 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:57.213 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:57.475 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:57.475 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:57.475 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:57.475 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:57.475 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:57.735 Malloc1 00:16:57.735 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:57.995 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:57.995 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:58.255 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:58.255 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:58.255 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:58.516 Malloc2 00:16:58.516 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:58.516 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:58.776 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2013084 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2013084 ']' 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2013084 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2013084 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013084' 00:16:59.036 killing process with pid 2013084 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2013084 00:16:59.036 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2013084 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:59.296 00:16:59.296 real 0m50.413s 00:16:59.296 user 3m15.229s 00:16:59.296 sys 0m2.704s 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:59.296 ************************************ 00:16:59.296 END TEST nvmf_vfio_user 00:16:59.296 ************************************ 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.296 ************************************ 00:16:59.296 START TEST nvmf_vfio_user_nvme_compliance 00:16:59.296 ************************************ 00:16:59.296 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:59.296 * Looking for test storage... 
00:16:59.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:59.297 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.297 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.297 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:59.558 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.559 --rc genhtml_branch_coverage=1 00:16:59.559 --rc genhtml_function_coverage=1 00:16:59.559 --rc genhtml_legend=1 00:16:59.559 --rc geninfo_all_blocks=1 00:16:59.559 --rc geninfo_unexecuted_blocks=1 00:16:59.559 00:16:59.559 ' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.559 --rc genhtml_branch_coverage=1 00:16:59.559 --rc genhtml_function_coverage=1 00:16:59.559 --rc genhtml_legend=1 00:16:59.559 --rc geninfo_all_blocks=1 00:16:59.559 --rc geninfo_unexecuted_blocks=1 00:16:59.559 00:16:59.559 ' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.559 --rc genhtml_branch_coverage=1 00:16:59.559 --rc genhtml_function_coverage=1 00:16:59.559 --rc genhtml_legend=1 00:16:59.559 --rc geninfo_all_blocks=1 00:16:59.559 --rc geninfo_unexecuted_blocks=1 00:16:59.559 00:16:59.559 ' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.559 --rc genhtml_branch_coverage=1 00:16:59.559 --rc genhtml_function_coverage=1 00:16:59.559 --rc genhtml_legend=1 00:16:59.559 --rc geninfo_all_blocks=1 00:16:59.559 --rc 
geninfo_unexecuted_blocks=1 00:16:59.559 00:16:59.559 ' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2013849 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2013849' 00:16:59.559 Process pid: 2013849 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2013849 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2013849 ']' 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.559 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.559 [2024-11-20 10:34:31.800102] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
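The "[: : integer expression expected" complaint above (it recurs in the fuzz and auth traces below) comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': bash's -eq operator requires integer operands, and the flag being tested expands to an empty string. A minimal reproduction, with a hedged guard (the ${flag:-0} default is an illustration of one possible fix, not SPDK's actual code):

#!/usr/bin/env bash
flag=''
[ "$flag" -eq 1 ] && echo enabled        # prints: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo enabled   # empty expands to 0, so the test is well-formed (and false)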
00:16:59.559 [2024-11-20 10:34:31.800177] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.559 [2024-11-20 10:34:31.886558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.560 [2024-11-20 10:34:31.927276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.560 [2024-11-20 10:34:31.927318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.560 [2024-11-20 10:34:31.927324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.560 [2024-11-20 10:34:31.927329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.560 [2024-11-20 10:34:31.927333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.560 [2024-11-20 10:34:31.928820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.560 [2024-11-20 10:34:31.929008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.560 [2024-11-20 10:34:31.929010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.499 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.499 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:00.499 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.439 malloc0 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:01.439 10:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.439 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:01.439 00:17:01.440 00:17:01.440 CUnit - A unit testing framework for C - Version 2.1-3 00:17:01.440 http://cunit.sourceforge.net/ 00:17:01.440 00:17:01.440 00:17:01.440 Suite: nvme_compliance 00:17:01.699 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 10:34:33.855561] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.699 [2024-11-20 10:34:33.856856] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:01.699 [2024-11-20 10:34:33.856867] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:01.699 [2024-11-20 10:34:33.856872] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:01.699 [2024-11-20 10:34:33.858583] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.699 passed 00:17:01.699 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 10:34:33.937061] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.699 [2024-11-20 10:34:33.940084] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.699 passed 00:17:01.699 Test: admin_identify_ns ...[2024-11-20 10:34:34.014512] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.959 [2024-11-20 10:34:34.078171] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:01.959 [2024-11-20 10:34:34.086168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:01.959 [2024-11-20 10:34:34.107250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:01.959 passed 00:17:01.959 Test: admin_get_features_mandatory_features ...[2024-11-20 10:34:34.181491] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.959 [2024-11-20 10:34:34.184517] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.959 passed 00:17:01.959 Test: admin_get_features_optional_features ...[2024-11-20 10:34:34.260971] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.959 [2024-11-20 10:34:34.266002] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.959 passed 00:17:02.218 Test: admin_set_features_number_of_queues ...[2024-11-20 10:34:34.339710] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.218 [2024-11-20 10:34:34.444254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.218 passed 00:17:02.218 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 10:34:34.520298] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.219 [2024-11-20 10:34:34.523325] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.219 passed 00:17:02.479 Test: admin_get_log_page_with_lpo ...[2024-11-20 10:34:34.598068] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.479 [2024-11-20 10:34:34.665169] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:02.479 [2024-11-20 10:34:34.678210] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.479 passed 00:17:02.479 Test: fabric_property_get ...[2024-11-20 10:34:34.754279] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.479 [2024-11-20 10:34:34.755479] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:02.479 [2024-11-20 10:34:34.757300] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.479 passed 00:17:02.479 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 10:34:34.832771] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.479 [2024-11-20 10:34:34.833972] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:02.479 [2024-11-20 10:34:34.835791] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.740 passed 00:17:02.740 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 10:34:34.911531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.740 [2024-11-20 10:34:34.999167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:02.740 [2024-11-20 10:34:35.015163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:02.740 [2024-11-20 10:34:35.020244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.740 passed 00:17:02.740 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 10:34:35.092481] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.740 [2024-11-20 10:34:35.093680] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:02.740 [2024-11-20 10:34:35.095504] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.001 passed 00:17:03.001 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 10:34:35.171237] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.001 [2024-11-20 10:34:35.249168] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:03.001 [2024-11-20 10:34:35.273163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:03.001 [2024-11-20 10:34:35.278235] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.001 passed 00:17:03.001 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 10:34:35.350413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.001 [2024-11-20 10:34:35.351625] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:03.001 [2024-11-20 10:34:35.351643] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:03.001 [2024-11-20 10:34:35.353429] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.261 passed 00:17:03.261 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 10:34:35.430163] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.261 [2024-11-20 10:34:35.524166] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:03.261 [2024-11-20 10:34:35.532166] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:03.261 [2024-11-20 10:34:35.540162] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:03.261 [2024-11-20 10:34:35.548161] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:03.261 [2024-11-20 10:34:35.577233] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.261 passed 00:17:03.521 Test: admin_create_io_sq_verify_pc ...[2024-11-20 10:34:35.650367] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.521 [2024-11-20 10:34:35.669171] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:03.521 [2024-11-20 10:34:35.686531] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.521 passed 00:17:03.521 Test: admin_create_io_qp_max_qps ...[2024-11-20 10:34:35.761984] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.906 [2024-11-20 10:34:36.869170] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:04.906 [2024-11-20 10:34:37.245662] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.906 passed 00:17:05.166 Test: admin_create_io_sq_shared_cq ...[2024-11-20 10:34:37.321413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.166 [2024-11-20 10:34:37.453162] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:05.166 [2024-11-20 10:34:37.490216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.166 passed 00:17:05.166 00:17:05.166 Run Summary: Type Total Ran Passed Failed Inactive 00:17:05.166 suites 1 1 n/a 0 0 00:17:05.166 tests 18 18 18 0 0 00:17:05.166 asserts 
360 360 360 0 n/a 00:17:05.166 00:17:05.166 Elapsed time = 1.492 seconds 00:17:05.166 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2013849 00:17:05.166 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2013849 ']' 00:17:05.166 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2013849 00:17:05.166 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:05.166 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2013849 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013849' 00:17:05.427 killing process with pid 2013849 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2013849 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2013849 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:05.427 00:17:05.427 real 0m6.202s 00:17:05.427 user 0m17.554s 00:17:05.427 sys 0m0.557s 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:05.427 ************************************ 00:17:05.427 END TEST nvmf_vfio_user_nvme_compliance 00:17:05.427 ************************************ 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.427 ************************************ 00:17:05.427 START TEST nvmf_vfio_user_fuzz 00:17:05.427 ************************************ 00:17:05.427 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:05.689 * Looking for test storage... 
00:17:05.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:05.689 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.690 --rc genhtml_branch_coverage=1 00:17:05.690 --rc genhtml_function_coverage=1 00:17:05.690 --rc genhtml_legend=1 00:17:05.690 --rc geninfo_all_blocks=1 00:17:05.690 --rc geninfo_unexecuted_blocks=1 00:17:05.690 00:17:05.690 ' 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.690 --rc genhtml_branch_coverage=1 00:17:05.690 --rc genhtml_function_coverage=1 00:17:05.690 --rc genhtml_legend=1 00:17:05.690 --rc geninfo_all_blocks=1 00:17:05.690 --rc geninfo_unexecuted_blocks=1 00:17:05.690 00:17:05.690 ' 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.690 --rc genhtml_branch_coverage=1 00:17:05.690 --rc genhtml_function_coverage=1 00:17:05.690 --rc genhtml_legend=1 00:17:05.690 --rc geninfo_all_blocks=1 00:17:05.690 --rc geninfo_unexecuted_blocks=1 00:17:05.690 00:17:05.690 ' 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.690 --rc genhtml_branch_coverage=1 00:17:05.690 --rc genhtml_function_coverage=1 00:17:05.690 --rc genhtml_legend=1 00:17:05.690 --rc geninfo_all_blocks=1 00:17:05.690 --rc geninfo_unexecuted_blocks=1 00:17:05.690 00:17:05.690 ' 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.690 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.690 [paths/export.sh@3-6 re-echo the same PATH; duplicate listings collapsed] 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:05.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2015063 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2015063' 00:17:05.690 Process pid: 2015063 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2015063 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2015063 ']' 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.690 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.691 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
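The trace above launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0x1 and then calls waitforlisten, which blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the helper simply polls the RPC socket (the real waitforlisten in common/autotest_common.sh does more bookkeeping):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll the RPC socket until the target responds; bail out if the process dies first.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"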
00:17:05.691 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.691 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.632 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.632 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:06.632 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:07.572 malloc0 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.572 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:07.832 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.832 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
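The rpc_cmd calls traced above build the whole vfio-user target in five RPCs. The same sequence, replayed directly with SPDK's scripts/rpc.py instead of the rpc_cmd test wrapper (a condensed sketch; the RPC names and arguments are exactly those recorded in this log):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t VFIOUSER                             # vfio-user transport
mkdir -p /var/run/vfio-user                                        # socket directory for the listener
$rpc bdev_malloc_create 64 512 -b malloc0                          # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # allow-any-host, serial "spdk"
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0      # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0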
00:17:07.832 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:39.949 Fuzzing completed. Shutting down the fuzz application 00:17:39.949 00:17:39.949 Dumping successful admin opcodes: 00:17:39.949 8, 9, 10, 24, 00:17:39.949 Dumping successful io opcodes: 00:17:39.949 0, 00:17:39.949 NS: 0x20000081ef00 I/O qp, Total commands completed: 1291105, total successful commands: 5065, random_seed: 993853568 00:17:39.949 NS: 0x20000081ef00 admin qp, Total commands completed: 295697, total successful commands: 2389, random_seed: 2752389568 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2015063 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2015063 ']' 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2015063 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015063 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015063' 00:17:39.949 killing process with pid 2015063 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2015063 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2015063 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:39.949 00:17:39.949 real 0m32.783s 00:17:39.949 user 0m34.778s 00:17:39.949 sys 0m26.267s 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.949 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:39.949 
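The fuzz summary above reports roughly 1.29 M I/O commands and 296 K admin commands completed over the 30-second run (-t 30) with a fixed starting seed (-S 123456). To reproduce just the summary figures, the same invocation recorded in this trace can be piped through grep (paths as in this workspace):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a \
    | grep -E 'Dumping successful|Total commands completed'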
************************************ 00:17:39.950 END TEST nvmf_vfio_user_fuzz 00:17:39.950 ************************************ 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.950 ************************************ 00:17:39.950 START TEST nvmf_auth_target 00:17:39.950 ************************************ 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:39.950 * Looking for test storage... 00:17:39.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.950 --rc genhtml_branch_coverage=1 00:17:39.950 --rc genhtml_function_coverage=1 00:17:39.950 --rc genhtml_legend=1 00:17:39.950 --rc geninfo_all_blocks=1 00:17:39.950 --rc geninfo_unexecuted_blocks=1 00:17:39.950 00:17:39.950 ' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.950 --rc genhtml_branch_coverage=1 00:17:39.950 --rc genhtml_function_coverage=1 00:17:39.950 --rc genhtml_legend=1 00:17:39.950 --rc geninfo_all_blocks=1 00:17:39.950 --rc geninfo_unexecuted_blocks=1 00:17:39.950 00:17:39.950 ' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.950 --rc genhtml_branch_coverage=1 00:17:39.950 --rc genhtml_function_coverage=1 00:17:39.950 --rc genhtml_legend=1 00:17:39.950 --rc geninfo_all_blocks=1 00:17:39.950 --rc geninfo_unexecuted_blocks=1 00:17:39.950 00:17:39.950 ' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.950 --rc genhtml_branch_coverage=1 00:17:39.950 --rc genhtml_function_coverage=1 00:17:39.950 --rc genhtml_legend=1 00:17:39.950 --rc geninfo_all_blocks=1 00:17:39.950 --rc geninfo_unexecuted_blocks=1 00:17:39.950 00:17:39.950 ' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.950 10:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.950 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.951 [paths/export.sh@4 and @6 re-echo the same PATH as @2 above; duplicate listings collapsed] 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.951 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.542 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:46.542 
10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.542 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:46.542 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.542 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.542 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.542 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:46.543 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.543 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:46.543 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:46.543 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:46.543 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.543 10:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:17:46.543 00:17:46.543 --- 10.0.0.2 ping statistics --- 00:17:46.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.543 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:17:46.543 00:17:46.543 --- 10.0.0.1 ping statistics --- 00:17:46.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.543 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.543 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2025056 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2025056 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2025056 ']' 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
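
Before any DH-CHAP traffic flows, the nvmf_tcp_init sequence above has split the two e810 ports between network namespaces: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk and becomes the target side, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and nvmf_tgt is then launched inside the namespace. A condensed recap of that plumbing, using exactly the interface and namespace names from the trace:

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # root ns -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
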
00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.544 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2025395 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=64fae15cf7cb6d906f78aadb43cd3cfec201eb32b95f03b3 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DXX 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 64fae15cf7cb6d906f78aadb43cd3cfec201eb32b95f03b3 0 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 64fae15cf7cb6d906f78aadb43cd3cfec201eb32b95f03b3 0 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=64fae15cf7cb6d906f78aadb43cd3cfec201eb32b95f03b3 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
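
The `python -` step that just ran is what turns the raw hex string drawn from /dev/urandom into the DHHC-1 secret written to /tmp/spdk.key-null.DXX. A minimal stand-alone sketch of that encoding, assuming the nvme-cli secret layout (base64 over the ASCII key bytes followed by their CRC32 as four little-endian bytes); fmt_dhchap is an illustrative name, not the script's helper:

# Hypothetical helper, assuming DHHC-1:<digest-id>:<base64(key || crc32_le(key))>:
fmt_dhchap() {
  local key=$1 digest=$2  # digest id: 0=null 1=sha256 2=sha384 3=sha512
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC32, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

Under that assumption, `fmt_dhchap 64fae15cf7cb6d906f78aadb43cd3cfec201eb32b95f03b3 0` should reproduce the DHHC-1:00:NjRm...== secret that nvme connect presents for key0 further down in this log.
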
00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DXX 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DXX 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DXX 00:17:47.116 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3de2bc4c49abd9210c467e93e80febda0b6e02597f110d58a24fd360df30f01c 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.U1Q 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3de2bc4c49abd9210c467e93e80febda0b6e02597f110d58a24fd360df30f01c 3 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3de2bc4c49abd9210c467e93e80febda0b6e02597f110d58a24fd360df30f01c 3 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3de2bc4c49abd9210c467e93e80febda0b6e02597f110d58a24fd360df30f01c 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.U1Q 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.U1Q 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.U1Q 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
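
Every gen_dhchap_key call in this stretch follows the same pattern: draw len/2 random bytes as a len-character hex string, encode it as above, and park the secret in a mode-0600 temp file whose path becomes keys[i] or ckeys[i]. An illustrative condensation (gen_key is a made-up name, reusing the fmt_dhchap sketch above):

# Illustrative recap of the gen_dhchap_key pattern seen in the trace.
gen_key() {
  local digest=$1 len=$2 key file
  declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)  # map from the trace
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex characters
  file=$(mktemp -t "spdk.key-$digest.XXX")
  fmt_dhchap "$key" "${ids[$digest]}" > "$file"
  chmod 0600 "$file"
  echo "$file"
}
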
00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ca775e554074998bbee557d1fd305c6 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QTK 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ca775e554074998bbee557d1fd305c6 1 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9ca775e554074998bbee557d1fd305c6 1 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ca775e554074998bbee557d1fd305c6 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QTK 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QTK 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.QTK 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:47.117 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0c6733d7d54f1d15783ef288c4041610abf7e198ca860d00 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OYR 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0c6733d7d54f1d15783ef288c4041610abf7e198ca860d00 2 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0c6733d7d54f1d15783ef288c4041610abf7e198ca860d00 2 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.378 10:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0c6733d7d54f1d15783ef288c4041610abf7e198ca860d00 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OYR 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OYR 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.OYR 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f416471c4e3f4a21de018a34ab62d00a65237981db65e6cb 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vxS 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f416471c4e3f4a21de018a34ab62d00a65237981db65e6cb 2 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f416471c4e3f4a21de018a34ab62d00a65237981db65e6cb 2 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f416471c4e3f4a21de018a34ab62d00a65237981db65e6cb 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vxS 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vxS 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.vxS 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=35f880d9cbfbfee95480044f3ca90525 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LS1 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 35f880d9cbfbfee95480044f3ca90525 1 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 35f880d9cbfbfee95480044f3ca90525 1 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.378 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=35f880d9cbfbfee95480044f3ca90525 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LS1 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LS1 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.LS1 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b0762921458b2af1d62c6fc6d43d4c7e5ff28966d0c04a7c6023b709869c323b 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O5p 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key b0762921458b2af1d62c6fc6d43d4c7e5ff28966d0c04a7c6023b709869c323b 3 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b0762921458b2af1d62c6fc6d43d4c7e5ff28966d0c04a7c6023b709869c323b 3 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b0762921458b2af1d62c6fc6d43d4c7e5ff28966d0c04a7c6023b709869c323b 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:47.379 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O5p 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O5p 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.O5p 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2025056 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2025056 ']' 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2025395 /var/tmp/host.sock 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2025395 ']' 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
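
Once spdk_tgt answers on /var/tmp/host.sock, every secret gets registered twice: rpc_cmd loads it into the target's keyring over the default /var/tmp/spdk.sock socket, and hostrpc does the same on the host side. The target/auth.sh@108-113 records that follow all reduce to this loop:

# Shape of the registration loop traced below; rpc.py without -s talks to the
# target at /var/tmp/spdk.sock, -s /var/tmp/host.sock reaches the host daemon.
for i in "${!keys[@]}"; do
  rpc.py keyring_file_add_key "key$i" "${keys[i]}"
  rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"
  if [[ -n ${ckeys[i]} ]]; then  # ckeys[3] is empty, so key3 gets no ctrlr key
    rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done
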
00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.641 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DXX 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DXX 00:17:47.942 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DXX 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.U1Q ]] 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U1Q 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U1Q 00:17:48.203 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U1Q 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QTK 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.462 10:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QTK 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QTK 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.OYR ]] 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYR 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYR 00:17:48.462 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYR 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vxS 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vxS 00:17:48.722 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vxS 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.LS1 ]] 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LS1 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LS1 00:17:48.981 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LS1 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:49.242 10:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.O5p 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.O5p 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.O5p 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.242 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.503 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.503 
10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.764 00:17:49.764 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.764 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.764 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.025 { 00:17:50.025 "cntlid": 1, 00:17:50.025 "qid": 0, 00:17:50.025 "state": "enabled", 00:17:50.025 "thread": "nvmf_tgt_poll_group_000", 00:17:50.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.025 "listen_address": { 00:17:50.025 "trtype": "TCP", 00:17:50.025 "adrfam": "IPv4", 00:17:50.025 "traddr": "10.0.0.2", 00:17:50.025 "trsvcid": "4420" 00:17:50.025 }, 00:17:50.025 "peer_address": { 00:17:50.025 "trtype": "TCP", 00:17:50.025 "adrfam": "IPv4", 00:17:50.025 "traddr": "10.0.0.1", 00:17:50.025 "trsvcid": "37402" 00:17:50.025 }, 00:17:50.025 "auth": { 00:17:50.025 "state": "completed", 00:17:50.025 "digest": "sha256", 00:17:50.025 "dhgroup": "null" 00:17:50.025 } 00:17:50.025 } 00:17:50.025 ]' 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.025 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.285 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:17:50.285 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:17:50.855 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.115 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.116 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.116 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.116 10:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.116 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.376 00:17:51.376 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.376 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.376 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.636 { 00:17:51.636 "cntlid": 3, 00:17:51.636 "qid": 0, 00:17:51.636 "state": "enabled", 00:17:51.636 "thread": "nvmf_tgt_poll_group_000", 00:17:51.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.636 "listen_address": { 00:17:51.636 "trtype": "TCP", 00:17:51.636 "adrfam": "IPv4", 00:17:51.636 "traddr": "10.0.0.2", 00:17:51.636 "trsvcid": "4420" 00:17:51.636 }, 00:17:51.636 "peer_address": { 00:17:51.636 "trtype": "TCP", 00:17:51.636 "adrfam": "IPv4", 00:17:51.636 "traddr": "10.0.0.1", 00:17:51.636 "trsvcid": "37426" 00:17:51.636 }, 00:17:51.636 "auth": { 00:17:51.636 "state": "completed", 00:17:51.636 "digest": "sha256", 00:17:51.636 "dhgroup": "null" 00:17:51.636 } 00:17:51.636 } 00:17:51.636 ]' 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.636 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.897 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.897 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.897 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.897 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:17:51.897 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.837 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.837 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.838 10:35:25 
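
Each attach is followed by the same three assertions, visible as the jq lines above (for example against the cntlid 3 qpair dump): the controller must exist on the host side, and the target's view of the qpair must report the negotiated digest and DH group with authentication in the "completed" state. Condensed, reusing $rpc and the NQNs from the previous sketch:

    # The connect alone is not enough; confirm auth actually completed.
    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
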
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.838 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.098 00:17:53.098 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.098 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.098 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.358 { 00:17:53.358 "cntlid": 5, 00:17:53.358 "qid": 0, 00:17:53.358 "state": "enabled", 00:17:53.358 "thread": "nvmf_tgt_poll_group_000", 00:17:53.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.358 "listen_address": { 00:17:53.358 "trtype": "TCP", 00:17:53.358 "adrfam": "IPv4", 00:17:53.358 "traddr": "10.0.0.2", 00:17:53.358 "trsvcid": "4420" 00:17:53.358 }, 00:17:53.358 "peer_address": { 00:17:53.358 "trtype": "TCP", 00:17:53.358 "adrfam": "IPv4", 00:17:53.358 "traddr": "10.0.0.1", 00:17:53.358 "trsvcid": "60596" 00:17:53.358 }, 00:17:53.358 "auth": { 00:17:53.358 "state": "completed", 00:17:53.358 "digest": "sha256", 00:17:53.358 "dhgroup": "null" 00:17:53.358 } 00:17:53.358 } 00:17:53.358 ]' 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.358 10:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.358 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.618 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:17:53.618 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.188 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.449 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.708 00:17:54.708 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.709 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.709 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.969 { 00:17:54.969 "cntlid": 7, 00:17:54.969 "qid": 0, 00:17:54.969 "state": "enabled", 00:17:54.969 "thread": "nvmf_tgt_poll_group_000", 00:17:54.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.969 "listen_address": { 00:17:54.969 "trtype": "TCP", 00:17:54.969 "adrfam": "IPv4", 00:17:54.969 "traddr": "10.0.0.2", 00:17:54.969 "trsvcid": "4420" 00:17:54.969 }, 00:17:54.969 "peer_address": { 00:17:54.969 "trtype": "TCP", 00:17:54.969 "adrfam": "IPv4", 00:17:54.969 "traddr": "10.0.0.1", 00:17:54.969 "trsvcid": "60632" 00:17:54.969 }, 00:17:54.969 "auth": { 00:17:54.969 "state": "completed", 00:17:54.969 "digest": "sha256", 00:17:54.969 "dhgroup": "null" 00:17:54.969 } 00:17:54.969 } 00:17:54.969 ]' 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
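
Note the asymmetry in the key3 pass above: both nvmf_subsystem_add_host and the attach carry --dhchap-key key3 with no --dhchap-ctrlr-key, because the fourth ckeys entry is empty and the ${ckeys[$3]:+...} expansion at auth.sh@68 drops the controller-key arguments entirely. That is how the suite covers unidirectional (host-only) authentication alongside the bidirectional cases. The expansion behaves as follows (array contents illustrative):

    # ${var:+word} expands to "word" only when var is set and non-empty, so an
    # empty ckeys[3] removes the --dhchap-ctrlr-key flags altogether.
    ckeys=( ckey0 ckey1 ckey2 "" )
    keyid=3
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    echo "extra args: ${ckey[@]:-<none>}"   # -> extra args: <none>
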
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.969 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.229 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:17:55.229 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.799 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
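
The for-loop markers above (auth.sh@119 and @120) reveal the shape of the sweep: an outer walk over DH groups, an inner walk over key indices, with bdev_nvme_set_options re-issued before every combination so the host will only negotiate the pair under test. Reconstructed from the line references (only sha256 with null/ffdhe2048/ffdhe3072 appears in this excerpt; the remaining ffdhe groups and digests presumably follow):

    dhgroups=( null ffdhe2048 ffdhe3072 )            # groups seen so far
    for dhgroup in "${dhgroups[@]}"; do              # target/auth.sh@119
        for keyid in "${!keys[@]}"; do               # target/auth.sh@120
            # "keys" is the script's own key array (key0..key3).
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                    --dhchap-dhgroups "$dhgroup"     # target/auth.sh@121
            connect_authenticate sha256 "$dhgroup" "$keyid"   # target/auth.sh@123
        done
    done
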
common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.059 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.319 00:17:56.319 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.319 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.319 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.579 { 00:17:56.579 "cntlid": 9, 00:17:56.579 "qid": 0, 00:17:56.579 "state": "enabled", 00:17:56.579 "thread": "nvmf_tgt_poll_group_000", 00:17:56.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.579 "listen_address": { 00:17:56.579 "trtype": "TCP", 00:17:56.579 "adrfam": "IPv4", 00:17:56.579 "traddr": "10.0.0.2", 00:17:56.579 "trsvcid": "4420" 00:17:56.579 }, 00:17:56.579 "peer_address": { 00:17:56.579 "trtype": "TCP", 00:17:56.579 "adrfam": "IPv4", 00:17:56.579 "traddr": "10.0.0.1", 00:17:56.579 "trsvcid": "60662" 00:17:56.579 }, 00:17:56.579 "auth": { 00:17:56.579 "state": "completed", 00:17:56.579 "digest": "sha256", 00:17:56.579 "dhgroup": "ffdhe2048" 00:17:56.579 } 00:17:56.579 } 00:17:56.579 ]' 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.579 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.840 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:17:56.840 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.410 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.670 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:57.670 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.670 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.670 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.670 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.670 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.671 10:35:29 
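
The backslash-riddled comparisons such as [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] are not corruption. Inside [[ ]] the right-hand side of == is a glob pattern, and bash's xtrace prints a quoted pattern with every character escaped to show it will be matched literally; the script itself compares against a quoted value:

    # What the script runs vs. what xtrace prints.
    dhgroup=ffdhe2048
    [[ $dhgroup == "ffdhe2048" ]] && echo literal   # xtrace: \f\f\d\h\e\2\0\4\8
    [[ $dhgroup == ffdhe* ]]      && echo glob      # unquoted RHS stays a pattern
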
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.671 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.671 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.671 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.671 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.671 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.671 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.930 00:17:57.930 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.930 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.930 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.189 { 00:17:58.189 "cntlid": 11, 00:17:58.189 "qid": 0, 00:17:58.189 "state": "enabled", 00:17:58.189 "thread": "nvmf_tgt_poll_group_000", 00:17:58.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.189 "listen_address": { 00:17:58.189 "trtype": "TCP", 00:17:58.189 "adrfam": "IPv4", 00:17:58.189 "traddr": "10.0.0.2", 00:17:58.189 "trsvcid": "4420" 00:17:58.189 }, 00:17:58.189 "peer_address": { 00:17:58.189 "trtype": "TCP", 00:17:58.189 "adrfam": "IPv4", 00:17:58.189 "traddr": "10.0.0.1", 00:17:58.189 "trsvcid": "60682" 00:17:58.189 }, 00:17:58.189 "auth": { 00:17:58.189 "state": "completed", 00:17:58.189 "digest": "sha256", 00:17:58.189 "dhgroup": "ffdhe2048" 00:17:58.189 } 00:17:58.189 } 00:17:58.189 ]' 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.189 10:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.189 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.450 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:17:58.450 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:17:59.020 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.280 10:35:31 
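
Besides the SPDK initiator, every pass also checks the kernel path: nvme-cli connects in-band with the same material encoded as DHHC-1 secrets, the NVMe-oF transport format for DH-HMAC-CHAP keys ("DHHC-1:<hash id>:<base64 key+CRC>:", where the leading numeric field identifies the hash the secret is tied to, 00 denoting an unhashed secret). Condensed from the key1 connect above, reusing $hostnqn and with the secret strings elided (full values appear verbatim in the log):

    # Kernel-initiator check: connect with in-band DH-HMAC-CHAP, then disconnect.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret      'DHHC-1:01:OWNh...9zH:' \
        --dhchap-ctrl-secret 'DHHC-1:02:MGM2...pw==:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
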
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.280 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.541 00:17:59.541 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.541 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.541 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.801 { 00:17:59.801 "cntlid": 13, 00:17:59.801 "qid": 0, 00:17:59.801 "state": "enabled", 00:17:59.801 "thread": "nvmf_tgt_poll_group_000", 00:17:59.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.801 "listen_address": { 00:17:59.801 "trtype": "TCP", 00:17:59.801 "adrfam": "IPv4", 00:17:59.801 "traddr": "10.0.0.2", 00:17:59.801 "trsvcid": "4420" 00:17:59.801 }, 00:17:59.801 "peer_address": { 00:17:59.801 "trtype": "TCP", 00:17:59.801 "adrfam": "IPv4", 00:17:59.801 "traddr": "10.0.0.1", 00:17:59.801 "trsvcid": "60710" 00:17:59.801 }, 00:17:59.801 "auth": { 00:17:59.801 "state": "completed", 00:17:59.801 "digest": 
"sha256", 00:17:59.801 "dhgroup": "ffdhe2048" 00:17:59.801 } 00:17:59.801 } 00:17:59.801 ]' 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.801 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.061 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:00.061 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:00.631 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.890 10:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.890 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.151 00:18:01.151 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.151 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.151 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.411 { 00:18:01.411 "cntlid": 15, 00:18:01.411 "qid": 0, 00:18:01.411 "state": "enabled", 00:18:01.411 "thread": "nvmf_tgt_poll_group_000", 00:18:01.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.411 "listen_address": { 00:18:01.411 "trtype": "TCP", 00:18:01.411 "adrfam": "IPv4", 00:18:01.411 "traddr": "10.0.0.2", 00:18:01.411 "trsvcid": "4420" 00:18:01.411 }, 00:18:01.411 "peer_address": { 00:18:01.411 "trtype": "TCP", 00:18:01.411 "adrfam": "IPv4", 00:18:01.411 "traddr": "10.0.0.1", 00:18:01.411 
"trsvcid": "60748" 00:18:01.411 }, 00:18:01.411 "auth": { 00:18:01.411 "state": "completed", 00:18:01.411 "digest": "sha256", 00:18:01.411 "dhgroup": "ffdhe2048" 00:18:01.411 } 00:18:01.411 } 00:18:01.411 ]' 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.411 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.412 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.412 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.412 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.412 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.412 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.672 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:01.672 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.243 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:02.503 10:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.503 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.762 00:18:02.762 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.762 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.762 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.022 { 00:18:03.022 "cntlid": 17, 00:18:03.022 "qid": 0, 00:18:03.022 "state": "enabled", 00:18:03.022 "thread": "nvmf_tgt_poll_group_000", 00:18:03.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.022 "listen_address": { 00:18:03.022 "trtype": "TCP", 00:18:03.022 "adrfam": "IPv4", 
00:18:03.022 "traddr": "10.0.0.2", 00:18:03.022 "trsvcid": "4420" 00:18:03.022 }, 00:18:03.022 "peer_address": { 00:18:03.022 "trtype": "TCP", 00:18:03.022 "adrfam": "IPv4", 00:18:03.022 "traddr": "10.0.0.1", 00:18:03.022 "trsvcid": "52202" 00:18:03.022 }, 00:18:03.022 "auth": { 00:18:03.022 "state": "completed", 00:18:03.022 "digest": "sha256", 00:18:03.022 "dhgroup": "ffdhe3072" 00:18:03.022 } 00:18:03.022 } 00:18:03.022 ]' 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.022 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.283 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:03.283 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.853 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
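
Two RPC endpoints are in play throughout: bare rpc_cmd talks to the target application's default socket, while every hostrpc line (expanded at target/auth.sh@31, as here) prefixes rpc.py with -s /var/tmp/host.sock to reach the second SPDK process acting as the host. A plausible reconstruction of the helper:

    # Assumed shape of the wrapper behind each "hostrpc ..." line (auth.sh@31).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
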
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.114 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.374 00:18:04.374 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.374 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.374 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.635 { 
00:18:04.635 "cntlid": 19, 00:18:04.635 "qid": 0, 00:18:04.635 "state": "enabled", 00:18:04.635 "thread": "nvmf_tgt_poll_group_000", 00:18:04.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.635 "listen_address": { 00:18:04.635 "trtype": "TCP", 00:18:04.635 "adrfam": "IPv4", 00:18:04.635 "traddr": "10.0.0.2", 00:18:04.635 "trsvcid": "4420" 00:18:04.635 }, 00:18:04.635 "peer_address": { 00:18:04.635 "trtype": "TCP", 00:18:04.635 "adrfam": "IPv4", 00:18:04.635 "traddr": "10.0.0.1", 00:18:04.635 "trsvcid": "52230" 00:18:04.635 }, 00:18:04.635 "auth": { 00:18:04.635 "state": "completed", 00:18:04.635 "digest": "sha256", 00:18:04.635 "dhgroup": "ffdhe3072" 00:18:04.635 } 00:18:04.635 } 00:18:04.635 ]' 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.635 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.635 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.635 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.896 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.896 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.896 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.896 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:04.896 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.873 10:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.873 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.158 00:18:06.158 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.158 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.158 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.419 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.419 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.420 10:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.420 { 00:18:06.420 "cntlid": 21, 00:18:06.420 "qid": 0, 00:18:06.420 "state": "enabled", 00:18:06.420 "thread": "nvmf_tgt_poll_group_000", 00:18:06.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.420 "listen_address": { 00:18:06.420 "trtype": "TCP", 00:18:06.420 "adrfam": "IPv4", 00:18:06.420 "traddr": "10.0.0.2", 00:18:06.420 "trsvcid": "4420" 00:18:06.420 }, 00:18:06.420 "peer_address": { 00:18:06.420 "trtype": "TCP", 00:18:06.420 "adrfam": "IPv4", 00:18:06.420 "traddr": "10.0.0.1", 00:18:06.420 "trsvcid": "52262" 00:18:06.420 }, 00:18:06.420 "auth": { 00:18:06.420 "state": "completed", 00:18:06.420 "digest": "sha256", 00:18:06.420 "dhgroup": "ffdhe3072" 00:18:06.420 } 00:18:06.420 } 00:18:06.420 ]' 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.420 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.681 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:06.681 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:07.251 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.511 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.771 00:18:07.771 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.771 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.771 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.032 10:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.032 { 00:18:08.032 "cntlid": 23, 00:18:08.032 "qid": 0, 00:18:08.032 "state": "enabled", 00:18:08.032 "thread": "nvmf_tgt_poll_group_000", 00:18:08.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.032 "listen_address": { 00:18:08.032 "trtype": "TCP", 00:18:08.032 "adrfam": "IPv4", 00:18:08.032 "traddr": "10.0.0.2", 00:18:08.032 "trsvcid": "4420" 00:18:08.032 }, 00:18:08.032 "peer_address": { 00:18:08.032 "trtype": "TCP", 00:18:08.032 "adrfam": "IPv4", 00:18:08.032 "traddr": "10.0.0.1", 00:18:08.032 "trsvcid": "52290" 00:18:08.032 }, 00:18:08.032 "auth": { 00:18:08.032 "state": "completed", 00:18:08.032 "digest": "sha256", 00:18:08.032 "dhgroup": "ffdhe3072" 00:18:08.032 } 00:18:08.032 } 00:18:08.032 ]' 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.032 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.292 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:08.292 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.862 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.123 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.383 00:18:09.383 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.383 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.383 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.644 { 00:18:09.644 "cntlid": 25, 00:18:09.644 "qid": 0, 00:18:09.644 "state": "enabled", 00:18:09.644 "thread": "nvmf_tgt_poll_group_000", 00:18:09.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.644 "listen_address": { 00:18:09.644 "trtype": "TCP", 00:18:09.644 "adrfam": "IPv4", 00:18:09.644 "traddr": "10.0.0.2", 00:18:09.644 "trsvcid": "4420" 00:18:09.644 }, 00:18:09.644 "peer_address": { 00:18:09.644 "trtype": "TCP", 00:18:09.644 "adrfam": "IPv4", 00:18:09.644 "traddr": "10.0.0.1", 00:18:09.644 "trsvcid": "52306" 00:18:09.644 }, 00:18:09.644 "auth": { 00:18:09.644 "state": "completed", 00:18:09.644 "digest": "sha256", 00:18:09.644 "dhgroup": "ffdhe4096" 00:18:09.644 } 00:18:09.644 } 00:18:09.644 ]' 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.644 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.904 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:09.904 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:10.474 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.734 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.994 00:18:10.994 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.994 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.994 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.255 { 00:18:11.255 "cntlid": 27, 00:18:11.255 "qid": 0, 00:18:11.255 "state": "enabled", 00:18:11.255 "thread": "nvmf_tgt_poll_group_000", 00:18:11.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.255 "listen_address": { 00:18:11.255 "trtype": "TCP", 00:18:11.255 "adrfam": "IPv4", 00:18:11.255 "traddr": "10.0.0.2", 00:18:11.255 "trsvcid": "4420" 00:18:11.255 }, 00:18:11.255 "peer_address": { 00:18:11.255 "trtype": "TCP", 00:18:11.255 "adrfam": "IPv4", 00:18:11.255 "traddr": "10.0.0.1", 00:18:11.255 "trsvcid": "52336" 00:18:11.255 }, 00:18:11.255 "auth": { 00:18:11.255 "state": "completed", 00:18:11.255 "digest": "sha256", 00:18:11.255 "dhgroup": "ffdhe4096" 00:18:11.255 } 00:18:11.255 } 00:18:11.255 ]' 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.255 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.515 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:11.515 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:12.086 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:12.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.087 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.087 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.087 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.346 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.606 00:18:12.606 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
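Each round of trace above is the same connect_authenticate cycle from target/auth.sh, repeated once per key index for every digest/dhgroup pair (here sha256 with ffdhe3072, then ffdhe4096). A condensed sketch of one round follows, using the socket, NQNs, and host UUID from this run; hostrpc is the suite's wrapper that points rpc.py at /var/tmp/host.sock, and this is a summary of the flow, not the literal auth.sh source:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host side: restrict the initiator to the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target side: register the host with the key (and controller key) for this round.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach over TCP; DH-HMAC-CHAP runs during the CONNECT exchange.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Target side: confirm the qpair negotiated what was requested.
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'
  # expected: completed / sha256 / ffdhe4096

  # Tear down before the next key index.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq probes at the end of each round are what gate pass/fail: a qpair whose auth block does not report state "completed" with the requested digest and dhgroup fails the [[ ... ]] comparisons seen in the trace.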
00:18:12.606 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.606 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.867 { 00:18:12.867 "cntlid": 29, 00:18:12.867 "qid": 0, 00:18:12.867 "state": "enabled", 00:18:12.867 "thread": "nvmf_tgt_poll_group_000", 00:18:12.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.867 "listen_address": { 00:18:12.867 "trtype": "TCP", 00:18:12.867 "adrfam": "IPv4", 00:18:12.867 "traddr": "10.0.0.2", 00:18:12.867 "trsvcid": "4420" 00:18:12.867 }, 00:18:12.867 "peer_address": { 00:18:12.867 "trtype": "TCP", 00:18:12.867 "adrfam": "IPv4", 00:18:12.867 "traddr": "10.0.0.1", 00:18:12.867 "trsvcid": "47682" 00:18:12.867 }, 00:18:12.867 "auth": { 00:18:12.867 "state": "completed", 00:18:12.867 "digest": "sha256", 00:18:12.867 "dhgroup": "ffdhe4096" 00:18:12.867 } 00:18:12.867 } 00:18:12.867 ]' 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.867 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.128 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.128 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.128 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.128 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:13.128 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: 
--dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.070 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.331 00:18:14.331 10:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.331 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.331 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.591 { 00:18:14.591 "cntlid": 31, 00:18:14.591 "qid": 0, 00:18:14.591 "state": "enabled", 00:18:14.591 "thread": "nvmf_tgt_poll_group_000", 00:18:14.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.591 "listen_address": { 00:18:14.591 "trtype": "TCP", 00:18:14.591 "adrfam": "IPv4", 00:18:14.591 "traddr": "10.0.0.2", 00:18:14.591 "trsvcid": "4420" 00:18:14.591 }, 00:18:14.591 "peer_address": { 00:18:14.591 "trtype": "TCP", 00:18:14.591 "adrfam": "IPv4", 00:18:14.591 "traddr": "10.0.0.1", 00:18:14.591 "trsvcid": "47718" 00:18:14.591 }, 00:18:14.591 "auth": { 00:18:14.591 "state": "completed", 00:18:14.591 "digest": "sha256", 00:18:14.591 "dhgroup": "ffdhe4096" 00:18:14.591 } 00:18:14.591 } 00:18:14.591 ]' 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.591 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.592 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.592 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.592 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.592 10:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.852 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:14.852 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.422 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.681 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.941 00:18:15.941 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.941 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.941 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.201 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.201 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.201 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.201 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.201 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.201 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.201 { 00:18:16.201 "cntlid": 33, 00:18:16.201 "qid": 0, 00:18:16.201 "state": "enabled", 00:18:16.201 "thread": "nvmf_tgt_poll_group_000", 00:18:16.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.201 "listen_address": { 00:18:16.201 "trtype": "TCP", 00:18:16.201 "adrfam": "IPv4", 00:18:16.201 "traddr": "10.0.0.2", 00:18:16.201 "trsvcid": "4420" 00:18:16.201 }, 00:18:16.201 "peer_address": { 00:18:16.201 "trtype": "TCP", 00:18:16.201 "adrfam": "IPv4", 00:18:16.201 "traddr": "10.0.0.1", 00:18:16.201 "trsvcid": "47750" 00:18:16.202 }, 00:18:16.202 "auth": { 00:18:16.202 "state": "completed", 00:18:16.202 "digest": "sha256", 00:18:16.202 "dhgroup": "ffdhe6144" 00:18:16.202 } 00:18:16.202 } 00:18:16.202 ]' 00:18:16.202 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.202 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.202 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.202 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.202 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.473 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.473 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.473 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.473 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret 
DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:16.473 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.411 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.672 00:18:17.672 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.672 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.672 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.932 { 00:18:17.932 "cntlid": 35, 00:18:17.932 "qid": 0, 00:18:17.932 "state": "enabled", 00:18:17.932 "thread": "nvmf_tgt_poll_group_000", 00:18:17.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.932 "listen_address": { 00:18:17.932 "trtype": "TCP", 00:18:17.932 "adrfam": "IPv4", 00:18:17.932 "traddr": "10.0.0.2", 00:18:17.932 "trsvcid": "4420" 00:18:17.932 }, 00:18:17.932 "peer_address": { 00:18:17.932 "trtype": "TCP", 00:18:17.932 "adrfam": "IPv4", 00:18:17.932 "traddr": "10.0.0.1", 00:18:17.932 "trsvcid": "47782" 00:18:17.932 }, 00:18:17.932 "auth": { 00:18:17.932 "state": "completed", 00:18:17.932 "digest": "sha256", 00:18:17.932 "dhgroup": "ffdhe6144" 00:18:17.932 } 00:18:17.932 } 00:18:17.932 ]' 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.932 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.193 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:18.193 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:18.762 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.021 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.022 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.591 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.591 { 00:18:19.591 "cntlid": 37, 00:18:19.591 "qid": 0, 00:18:19.591 "state": "enabled", 00:18:19.591 "thread": "nvmf_tgt_poll_group_000", 00:18:19.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.591 "listen_address": { 00:18:19.591 "trtype": "TCP", 00:18:19.591 "adrfam": "IPv4", 00:18:19.591 "traddr": "10.0.0.2", 00:18:19.591 "trsvcid": "4420" 00:18:19.591 }, 00:18:19.591 "peer_address": { 00:18:19.591 "trtype": "TCP", 00:18:19.591 "adrfam": "IPv4", 00:18:19.591 "traddr": "10.0.0.1", 00:18:19.591 "trsvcid": "47812" 00:18:19.591 }, 00:18:19.591 "auth": { 00:18:19.591 "state": "completed", 00:18:19.591 "digest": "sha256", 00:18:19.591 "dhgroup": "ffdhe6144" 00:18:19.591 } 00:18:19.591 } 00:18:19.591 ]' 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.591 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.851 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.851 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.851 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.851 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:19.851 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.852 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:19.852 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.790 10:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.790 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.790 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.050 00:18:21.050 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.050 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.050 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.311 { 00:18:21.311 "cntlid": 39, 00:18:21.311 "qid": 0, 00:18:21.311 "state": "enabled", 00:18:21.311 "thread": "nvmf_tgt_poll_group_000", 00:18:21.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.311 "listen_address": { 00:18:21.311 "trtype": "TCP", 00:18:21.311 "adrfam": "IPv4", 00:18:21.311 "traddr": "10.0.0.2", 00:18:21.311 "trsvcid": "4420" 00:18:21.311 }, 00:18:21.311 "peer_address": { 00:18:21.311 "trtype": "TCP", 00:18:21.311 "adrfam": "IPv4", 00:18:21.311 "traddr": "10.0.0.1", 00:18:21.311 "trsvcid": "47848" 00:18:21.311 }, 00:18:21.311 "auth": { 00:18:21.311 "state": "completed", 00:18:21.311 "digest": "sha256", 00:18:21.311 "dhgroup": "ffdhe6144" 00:18:21.311 } 00:18:21.311 } 00:18:21.311 ]' 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.311 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:21.571 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:22.509 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.509 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.509 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.509 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.509 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.509 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
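These entries repeat one fixed pattern: for each digest, DH group, and key index, target/auth.sh restricts the host to a single --dhchap-digests/--dhchap-dhgroups pair and then runs connect_authenticate against the subsystem. A minimal sketch of that driver loop, reconstructed from the "for digest"/"for dhgroup"/"for keyid" xtrace lines above (the array and function names are the ones visible in the trace; the loop body ordering is otherwise an assumption, not the verbatim test source):

    # Sketch of the loop implied by the xtrace output -- not verbatim
    # target/auth.sh. digests/dhgroups/keys are the arrays the trace
    # iterates; hostrpc wraps rpc.py -s /var/tmp/host.sock.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # allow exactly one digest/dhgroup pair on the host side
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # register key$keyid on the subsystem, attach, verify, detach
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

Each pass also exercises the kernel initiator: the nvme connect/disconnect entries with the DHHC-1 secrets are the nvme-cli leg of the same key.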
00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.510 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.080 00:18:23.080 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.080 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.080 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.340 { 00:18:23.340 "cntlid": 41, 00:18:23.340 "qid": 0, 00:18:23.340 "state": "enabled", 00:18:23.340 "thread": "nvmf_tgt_poll_group_000", 00:18:23.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.340 "listen_address": { 00:18:23.340 "trtype": "TCP", 00:18:23.340 "adrfam": "IPv4", 00:18:23.340 "traddr": "10.0.0.2", 00:18:23.340 "trsvcid": "4420" 00:18:23.340 }, 00:18:23.340 "peer_address": { 00:18:23.340 "trtype": "TCP", 00:18:23.340 "adrfam": "IPv4", 00:18:23.340 "traddr": "10.0.0.1", 00:18:23.340 "trsvcid": "34374" 00:18:23.340 }, 00:18:23.340 "auth": { 00:18:23.340 "state": "completed", 00:18:23.340 "digest": "sha256", 00:18:23.340 "dhgroup": "ffdhe8192" 00:18:23.340 } 00:18:23.340 } 00:18:23.340 ]' 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.340 10:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.340 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.599 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:23.599 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.168 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.428 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.429 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.999 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.999 { 00:18:24.999 "cntlid": 43, 00:18:24.999 "qid": 0, 00:18:24.999 "state": "enabled", 00:18:24.999 "thread": "nvmf_tgt_poll_group_000", 00:18:24.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.999 "listen_address": { 00:18:24.999 "trtype": "TCP", 00:18:24.999 "adrfam": "IPv4", 00:18:24.999 "traddr": "10.0.0.2", 00:18:24.999 "trsvcid": "4420" 00:18:24.999 }, 00:18:24.999 "peer_address": { 00:18:24.999 "trtype": "TCP", 00:18:24.999 "adrfam": "IPv4", 00:18:24.999 "traddr": "10.0.0.1", 00:18:24.999 "trsvcid": "34406" 00:18:24.999 }, 00:18:24.999 "auth": { 00:18:24.999 "state": "completed", 00:18:24.999 "digest": "sha256", 00:18:24.999 "dhgroup": "ffdhe8192" 00:18:24.999 } 00:18:24.999 } 00:18:24.999 ]' 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.999 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:25.258 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.258 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.258 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.258 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.258 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.258 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.518 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:25.518 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.087 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.347 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.347 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.606 00:18:26.866 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.866 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.866 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.866 { 00:18:26.866 "cntlid": 45, 00:18:26.866 "qid": 0, 00:18:26.866 "state": "enabled", 00:18:26.866 "thread": "nvmf_tgt_poll_group_000", 00:18:26.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.866 "listen_address": { 00:18:26.866 "trtype": "TCP", 00:18:26.866 "adrfam": "IPv4", 00:18:26.866 "traddr": "10.0.0.2", 00:18:26.866 "trsvcid": "4420" 00:18:26.866 }, 00:18:26.866 "peer_address": { 00:18:26.866 "trtype": "TCP", 00:18:26.866 "adrfam": "IPv4", 00:18:26.866 "traddr": "10.0.0.1", 00:18:26.866 "trsvcid": "34430" 00:18:26.866 }, 00:18:26.866 "auth": { 00:18:26.866 "state": "completed", 00:18:26.866 "digest": "sha256", 00:18:26.866 "dhgroup": "ffdhe8192" 00:18:26.866 } 00:18:26.866 } 00:18:26.866 ]' 00:18:26.866 
10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.866 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:27.125 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.065 10:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.065 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.634 00:18:28.634 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.634 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.634 10:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.894 { 00:18:28.894 "cntlid": 47, 00:18:28.894 "qid": 0, 00:18:28.894 "state": "enabled", 00:18:28.894 "thread": "nvmf_tgt_poll_group_000", 00:18:28.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.894 "listen_address": { 00:18:28.894 "trtype": "TCP", 00:18:28.894 "adrfam": "IPv4", 00:18:28.894 "traddr": "10.0.0.2", 00:18:28.894 "trsvcid": "4420" 00:18:28.894 }, 00:18:28.894 "peer_address": { 00:18:28.894 "trtype": "TCP", 00:18:28.894 "adrfam": "IPv4", 00:18:28.894 "traddr": "10.0.0.1", 00:18:28.894 "trsvcid": "34448" 00:18:28.894 }, 00:18:28.894 "auth": { 00:18:28.894 "state": "completed", 00:18:28.894 
"digest": "sha256", 00:18:28.894 "dhgroup": "ffdhe8192" 00:18:28.894 } 00:18:28.894 } 00:18:28.894 ]' 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.894 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.154 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:29.154 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.724 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:29.984 10:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.984 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.243 00:18:30.243 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.243 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.243 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.503 { 00:18:30.503 "cntlid": 49, 00:18:30.503 "qid": 0, 00:18:30.503 "state": "enabled", 00:18:30.503 "thread": "nvmf_tgt_poll_group_000", 00:18:30.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.503 "listen_address": { 00:18:30.503 "trtype": "TCP", 00:18:30.503 "adrfam": "IPv4", 
00:18:30.503 "traddr": "10.0.0.2", 00:18:30.503 "trsvcid": "4420" 00:18:30.503 }, 00:18:30.503 "peer_address": { 00:18:30.503 "trtype": "TCP", 00:18:30.503 "adrfam": "IPv4", 00:18:30.503 "traddr": "10.0.0.1", 00:18:30.503 "trsvcid": "34474" 00:18:30.503 }, 00:18:30.503 "auth": { 00:18:30.503 "state": "completed", 00:18:30.503 "digest": "sha384", 00:18:30.503 "dhgroup": "null" 00:18:30.503 } 00:18:30.503 } 00:18:30.503 ]' 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.503 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.764 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:30.764 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.334 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.595 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.855 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.855 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.115 { 00:18:32.115 "cntlid": 51, 00:18:32.115 "qid": 0, 00:18:32.115 "state": "enabled", 
00:18:32.115 "thread": "nvmf_tgt_poll_group_000", 00:18:32.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.115 "listen_address": { 00:18:32.115 "trtype": "TCP", 00:18:32.115 "adrfam": "IPv4", 00:18:32.115 "traddr": "10.0.0.2", 00:18:32.115 "trsvcid": "4420" 00:18:32.115 }, 00:18:32.115 "peer_address": { 00:18:32.115 "trtype": "TCP", 00:18:32.115 "adrfam": "IPv4", 00:18:32.115 "traddr": "10.0.0.1", 00:18:32.115 "trsvcid": "34506" 00:18:32.115 }, 00:18:32.115 "auth": { 00:18:32.115 "state": "completed", 00:18:32.115 "digest": "sha384", 00:18:32.115 "dhgroup": "null" 00:18:32.115 } 00:18:32.115 } 00:18:32.115 ]' 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.115 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.375 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:32.375 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:32.946 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.206 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.466 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.466 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.726 10:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.726 { 00:18:33.726 "cntlid": 53, 00:18:33.726 "qid": 0, 00:18:33.726 "state": "enabled", 00:18:33.726 "thread": "nvmf_tgt_poll_group_000", 00:18:33.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.726 "listen_address": { 00:18:33.726 "trtype": "TCP", 00:18:33.726 "adrfam": "IPv4", 00:18:33.726 "traddr": "10.0.0.2", 00:18:33.726 "trsvcid": "4420" 00:18:33.726 }, 00:18:33.726 "peer_address": { 00:18:33.726 "trtype": "TCP", 00:18:33.726 "adrfam": "IPv4", 00:18:33.726 "traddr": "10.0.0.1", 00:18:33.726 "trsvcid": "47794" 00:18:33.726 }, 00:18:33.726 "auth": { 00:18:33.726 "state": "completed", 00:18:33.726 "digest": "sha384", 00:18:33.726 "dhgroup": "null" 00:18:33.726 } 00:18:33.726 } 00:18:33.726 ]' 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.726 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.986 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:33.986 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.556 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.817 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.817 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.817 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.817 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.817 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.817 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.817 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.077 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.077 { 00:18:35.077 "cntlid": 55, 00:18:35.077 "qid": 0, 00:18:35.077 "state": "enabled", 00:18:35.077 "thread": "nvmf_tgt_poll_group_000", 00:18:35.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.077 "listen_address": { 00:18:35.077 "trtype": "TCP", 00:18:35.077 "adrfam": "IPv4", 00:18:35.077 "traddr": "10.0.0.2", 00:18:35.077 "trsvcid": "4420" 00:18:35.077 }, 00:18:35.077 "peer_address": { 00:18:35.077 "trtype": "TCP", 00:18:35.077 "adrfam": "IPv4", 00:18:35.077 "traddr": "10.0.0.1", 00:18:35.077 "trsvcid": "47816" 00:18:35.077 }, 00:18:35.077 "auth": { 00:18:35.077 "state": "completed", 00:18:35.077 "digest": "sha384", 00:18:35.077 "dhgroup": "null" 00:18:35.077 } 00:18:35.077 } 00:18:35.077 ]' 00:18:35.077 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.337 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.337 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.337 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:35.338 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.338 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.338 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.338 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.597 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:35.597 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.165 10:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.165 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.425 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.425 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:36.684 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.684 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.684 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.684 { 00:18:36.684 "cntlid": 57, 00:18:36.684 "qid": 0, 00:18:36.684 "state": "enabled", 00:18:36.684 "thread": "nvmf_tgt_poll_group_000", 00:18:36.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.684 "listen_address": { 00:18:36.684 "trtype": "TCP", 00:18:36.684 "adrfam": "IPv4", 00:18:36.684 "traddr": "10.0.0.2", 00:18:36.684 "trsvcid": "4420" 00:18:36.684 }, 00:18:36.684 "peer_address": { 00:18:36.684 "trtype": "TCP", 00:18:36.684 "adrfam": "IPv4", 00:18:36.684 "traddr": "10.0.0.1", 00:18:36.684 "trsvcid": "47838" 00:18:36.684 }, 00:18:36.684 "auth": { 00:18:36.684 "state": "completed", 00:18:36.684 "digest": "sha384", 00:18:36.684 "dhgroup": "ffdhe2048" 00:18:36.684 } 00:18:36.684 } 00:18:36.684 ]' 00:18:36.684 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.944 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.945 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.945 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.945 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.945 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.945 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.945 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.204 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:37.204 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.771 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.030 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.030 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.290 { 00:18:38.290 "cntlid": 59, 00:18:38.290 "qid": 0, 00:18:38.290 "state": "enabled", 00:18:38.290 "thread": "nvmf_tgt_poll_group_000", 00:18:38.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.290 "listen_address": { 00:18:38.290 "trtype": "TCP", 00:18:38.290 "adrfam": "IPv4", 00:18:38.290 "traddr": "10.0.0.2", 00:18:38.290 "trsvcid": "4420" 00:18:38.290 }, 00:18:38.290 "peer_address": { 00:18:38.290 "trtype": "TCP", 00:18:38.290 "adrfam": "IPv4", 00:18:38.290 "traddr": "10.0.0.1", 00:18:38.290 "trsvcid": "47854" 00:18:38.290 }, 00:18:38.290 "auth": { 00:18:38.290 "state": "completed", 00:18:38.290 "digest": "sha384", 00:18:38.290 "dhgroup": "ffdhe2048" 00:18:38.290 } 00:18:38.290 } 00:18:38.290 ]' 00:18:38.290 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.550 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.809 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:38.809 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:39.376 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.377 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.636 10:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.636 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.897 { 00:18:39.897 "cntlid": 61, 00:18:39.897 "qid": 0, 00:18:39.897 "state": "enabled", 00:18:39.897 "thread": "nvmf_tgt_poll_group_000", 00:18:39.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.897 "listen_address": { 00:18:39.897 "trtype": "TCP", 00:18:39.897 "adrfam": "IPv4", 00:18:39.897 "traddr": "10.0.0.2", 00:18:39.897 "trsvcid": "4420" 00:18:39.897 }, 00:18:39.897 "peer_address": { 00:18:39.897 "trtype": "TCP", 00:18:39.897 "adrfam": "IPv4", 00:18:39.897 "traddr": "10.0.0.1", 00:18:39.897 "trsvcid": "47876" 00:18:39.897 }, 00:18:39.897 "auth": { 00:18:39.897 "state": "completed", 00:18:39.897 "digest": "sha384", 00:18:39.897 "dhgroup": "ffdhe2048" 00:18:39.897 } 00:18:39.897 } 00:18:39.897 ]' 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.897 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.157 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.157 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.157 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.157 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.157 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.416 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:40.416 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.985 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.243 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.503 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.503 { 00:18:41.503 "cntlid": 63, 00:18:41.503 "qid": 0, 00:18:41.503 "state": "enabled", 00:18:41.503 "thread": "nvmf_tgt_poll_group_000", 00:18:41.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.503 "listen_address": { 00:18:41.503 "trtype": "TCP", 00:18:41.503 "adrfam": "IPv4", 00:18:41.503 "traddr": "10.0.0.2", 00:18:41.503 "trsvcid": "4420" 00:18:41.503 }, 00:18:41.503 "peer_address": { 00:18:41.503 "trtype": "TCP", 00:18:41.503 "adrfam": "IPv4", 00:18:41.503 "traddr": "10.0.0.1", 00:18:41.503 "trsvcid": "47908" 00:18:41.503 }, 00:18:41.503 "auth": { 00:18:41.503 "state": "completed", 00:18:41.503 "digest": "sha384", 00:18:41.503 "dhgroup": "ffdhe2048" 00:18:41.503 } 00:18:41.503 } 00:18:41.503 ]' 00:18:41.503 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.763 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.023 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:42.023 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:42.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.592 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.852 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.112 
00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.112 { 00:18:43.112 "cntlid": 65, 00:18:43.112 "qid": 0, 00:18:43.112 "state": "enabled", 00:18:43.112 "thread": "nvmf_tgt_poll_group_000", 00:18:43.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.112 "listen_address": { 00:18:43.112 "trtype": "TCP", 00:18:43.112 "adrfam": "IPv4", 00:18:43.112 "traddr": "10.0.0.2", 00:18:43.112 "trsvcid": "4420" 00:18:43.112 }, 00:18:43.112 "peer_address": { 00:18:43.112 "trtype": "TCP", 00:18:43.112 "adrfam": "IPv4", 00:18:43.112 "traddr": "10.0.0.1", 00:18:43.112 "trsvcid": "42832" 00:18:43.112 }, 00:18:43.112 "auth": { 00:18:43.112 "state": "completed", 00:18:43.112 "digest": "sha384", 00:18:43.112 "dhgroup": "ffdhe3072" 00:18:43.112 } 00:18:43.112 } 00:18:43.112 ]' 00:18:43.112 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.372 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.632 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:43.632 10:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.201 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.519 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.848 00:18:44.848 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.848 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.848 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.848 { 00:18:44.848 "cntlid": 67, 00:18:44.848 "qid": 0, 00:18:44.848 "state": "enabled", 00:18:44.848 "thread": "nvmf_tgt_poll_group_000", 00:18:44.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.848 "listen_address": { 00:18:44.848 "trtype": "TCP", 00:18:44.848 "adrfam": "IPv4", 00:18:44.848 "traddr": "10.0.0.2", 00:18:44.848 "trsvcid": "4420" 00:18:44.848 }, 00:18:44.848 "peer_address": { 00:18:44.848 "trtype": "TCP", 00:18:44.848 "adrfam": "IPv4", 00:18:44.848 "traddr": "10.0.0.1", 00:18:44.848 "trsvcid": "42858" 00:18:44.848 }, 00:18:44.848 "auth": { 00:18:44.848 "state": "completed", 00:18:44.848 "digest": "sha384", 00:18:44.848 "dhgroup": "ffdhe3072" 00:18:44.848 } 00:18:44.848 } 00:18:44.848 ]' 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.848 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.116 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.116 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.116 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.116 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret 
DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:45.116 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:45.685 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.944 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.944 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.945 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.205 00:18:46.205 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.205 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.205 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.465 { 00:18:46.465 "cntlid": 69, 00:18:46.465 "qid": 0, 00:18:46.465 "state": "enabled", 00:18:46.465 "thread": "nvmf_tgt_poll_group_000", 00:18:46.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.465 "listen_address": { 00:18:46.465 "trtype": "TCP", 00:18:46.465 "adrfam": "IPv4", 00:18:46.465 "traddr": "10.0.0.2", 00:18:46.465 "trsvcid": "4420" 00:18:46.465 }, 00:18:46.465 "peer_address": { 00:18:46.465 "trtype": "TCP", 00:18:46.465 "adrfam": "IPv4", 00:18:46.465 "traddr": "10.0.0.1", 00:18:46.465 "trsvcid": "42890" 00:18:46.465 }, 00:18:46.465 "auth": { 00:18:46.465 "state": "completed", 00:18:46.465 "digest": "sha384", 00:18:46.465 "dhgroup": "ffdhe3072" 00:18:46.465 } 00:18:46.465 } 00:18:46.465 ]' 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.465 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.725 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.725 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.725 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:46.725 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:46.725 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
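Each connect_authenticate pass traced above follows the same three-step RPC sequence: the host-side bdev_nvme layer is pinned to the digest/dhgroup under test, the host NQN is registered on the target subsystem with the key being exercised, and a controller attach then forces the DH-HMAC-CHAP handshake. A minimal sketch of one pass, with the NQNs, addresses, and flags taken from this run — the key name (key3) refers to a key registered earlier in the script, outside this excerpt:

  # pin the host to one digest/dhgroup so the negotiation is deterministic
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # allow the host NQN on the subsystem, binding the DH-CHAP key under test
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key3

  # attaching the controller is what actually runs the authentication
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

Note the split visible in the hostrpc wrapper: nvmf_subsystem_* calls go to the target app's default RPC socket, while the -s /var/tmp/host.sock calls address a second SPDK app acting as the host initiator.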
00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.664 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.925 00:18:47.925 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.925 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.925 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.185 { 00:18:48.185 "cntlid": 71, 00:18:48.185 "qid": 0, 00:18:48.185 "state": "enabled", 00:18:48.185 "thread": "nvmf_tgt_poll_group_000", 00:18:48.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.185 "listen_address": { 00:18:48.185 "trtype": "TCP", 00:18:48.185 "adrfam": "IPv4", 00:18:48.185 "traddr": "10.0.0.2", 00:18:48.185 "trsvcid": "4420" 00:18:48.185 }, 00:18:48.185 "peer_address": { 00:18:48.185 "trtype": "TCP", 00:18:48.185 "adrfam": "IPv4", 00:18:48.185 "traddr": "10.0.0.1", 00:18:48.185 "trsvcid": "42908" 00:18:48.185 }, 00:18:48.185 "auth": { 00:18:48.185 "state": "completed", 00:18:48.185 "digest": "sha384", 00:18:48.185 "dhgroup": "ffdhe3072" 00:18:48.185 } 00:18:48.185 } 00:18:48.185 ]' 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.185 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.446 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:48.446 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.017 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
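After every attach, the script decides pass/fail purely from RPC state: the controller must appear under its expected name, and the target's view of the new qpair must report a completed authentication with the digest and dhgroup under test. The checks below mirror the jq filters at auth.sh@73–@77 in the log (the qpairs variable name is the script's own; ffdhe4096 stands in for whichever group the current iteration uses):

  # controller came up on the host side under the requested name
  [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # target reports the qpair's auth block as completed with the expected parameters
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Only after all three fields match does the script detach the controller and move on to the next key id.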
00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.279 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.539 00:18:49.539 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.539 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.539 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.799 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.799 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.799 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.799 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.799 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.799 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.799 { 00:18:49.799 "cntlid": 73, 00:18:49.799 "qid": 0, 00:18:49.799 "state": "enabled", 00:18:49.799 "thread": "nvmf_tgt_poll_group_000", 00:18:49.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.799 "listen_address": { 00:18:49.799 "trtype": "TCP", 00:18:49.799 "adrfam": "IPv4", 00:18:49.799 "traddr": "10.0.0.2", 00:18:49.799 "trsvcid": "4420" 00:18:49.800 }, 00:18:49.800 "peer_address": { 00:18:49.800 "trtype": "TCP", 00:18:49.800 "adrfam": "IPv4", 00:18:49.800 "traddr": "10.0.0.1", 00:18:49.800 "trsvcid": "42950" 00:18:49.800 }, 00:18:49.800 "auth": { 00:18:49.800 "state": "completed", 00:18:49.800 "digest": "sha384", 00:18:49.800 "dhgroup": "ffdhe4096" 00:18:49.800 } 00:18:49.800 } 00:18:49.800 ]' 00:18:49.800 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.800 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.800 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.800 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.800 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.800 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.800 
10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.800 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.061 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:50.061 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.631 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.893 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.154 00:18:51.154 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.154 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.154 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.415 { 00:18:51.415 "cntlid": 75, 00:18:51.415 "qid": 0, 00:18:51.415 "state": "enabled", 00:18:51.415 "thread": "nvmf_tgt_poll_group_000", 00:18:51.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.415 "listen_address": { 00:18:51.415 "trtype": "TCP", 00:18:51.415 "adrfam": "IPv4", 00:18:51.415 "traddr": "10.0.0.2", 00:18:51.415 "trsvcid": "4420" 00:18:51.415 }, 00:18:51.415 "peer_address": { 00:18:51.415 "trtype": "TCP", 00:18:51.415 "adrfam": "IPv4", 00:18:51.415 "traddr": "10.0.0.1", 00:18:51.415 "trsvcid": "42970" 00:18:51.415 }, 00:18:51.415 "auth": { 00:18:51.415 "state": "completed", 00:18:51.415 "digest": "sha384", 00:18:51.415 "dhgroup": "ffdhe4096" 00:18:51.415 } 00:18:51.415 } 00:18:51.415 ]' 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.415 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.676 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:51.676 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:52.247 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.507 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.767 00:18:52.767 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.767 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.767 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.027 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.027 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.027 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.027 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.027 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.027 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.027 { 00:18:53.027 "cntlid": 77, 00:18:53.027 "qid": 0, 00:18:53.027 "state": "enabled", 00:18:53.027 "thread": "nvmf_tgt_poll_group_000", 00:18:53.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.028 "listen_address": { 00:18:53.028 "trtype": "TCP", 00:18:53.028 "adrfam": "IPv4", 00:18:53.028 "traddr": "10.0.0.2", 00:18:53.028 "trsvcid": "4420" 00:18:53.028 }, 00:18:53.028 "peer_address": { 00:18:53.028 "trtype": "TCP", 00:18:53.028 "adrfam": "IPv4", 00:18:53.028 "traddr": "10.0.0.1", 00:18:53.028 "trsvcid": "41826" 00:18:53.028 }, 00:18:53.028 "auth": { 00:18:53.028 "state": "completed", 00:18:53.028 "digest": "sha384", 00:18:53.028 "dhgroup": "ffdhe4096" 00:18:53.028 } 00:18:53.028 } 00:18:53.028 ]' 00:18:53.028 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.028 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.028 10:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.028 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.028 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.288 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.288 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.288 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.288 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:53.288 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.231 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.232 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.492 00:18:54.492 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.492 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.492 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.752 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.752 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.752 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.752 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.752 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.752 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.752 { 00:18:54.752 "cntlid": 79, 00:18:54.752 "qid": 0, 00:18:54.752 "state": "enabled", 00:18:54.752 "thread": "nvmf_tgt_poll_group_000", 00:18:54.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.752 "listen_address": { 00:18:54.752 "trtype": "TCP", 00:18:54.752 "adrfam": "IPv4", 00:18:54.752 "traddr": "10.0.0.2", 00:18:54.752 "trsvcid": "4420" 00:18:54.752 }, 00:18:54.752 "peer_address": { 00:18:54.752 "trtype": "TCP", 00:18:54.752 "adrfam": "IPv4", 00:18:54.752 "traddr": "10.0.0.1", 00:18:54.752 "trsvcid": "41860" 00:18:54.752 }, 00:18:54.752 "auth": { 00:18:54.752 "state": "completed", 00:18:54.752 "digest": "sha384", 00:18:54.752 "dhgroup": "ffdhe4096" 00:18:54.752 } 00:18:54.753 } 00:18:54.753 ]' 00:18:54.753 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.753 10:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.753 10:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.753 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.753 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.753 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.753 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.753 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.013 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:55.013 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.583 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.843 10:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.843 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.844 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.103 00:18:56.104 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.104 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.104 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.365 { 00:18:56.365 "cntlid": 81, 00:18:56.365 "qid": 0, 00:18:56.365 "state": "enabled", 00:18:56.365 "thread": "nvmf_tgt_poll_group_000", 00:18:56.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.365 "listen_address": { 00:18:56.365 "trtype": "TCP", 00:18:56.365 "adrfam": "IPv4", 00:18:56.365 "traddr": "10.0.0.2", 00:18:56.365 "trsvcid": "4420" 00:18:56.365 }, 00:18:56.365 "peer_address": { 00:18:56.365 "trtype": "TCP", 00:18:56.365 "adrfam": "IPv4", 00:18:56.365 "traddr": "10.0.0.1", 00:18:56.365 "trsvcid": "41890" 00:18:56.365 }, 00:18:56.365 "auth": { 00:18:56.365 "state": "completed", 00:18:56.365 "digest": 
"sha384", 00:18:56.365 "dhgroup": "ffdhe6144" 00:18:56.365 } 00:18:56.365 } 00:18:56.365 ]' 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.365 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.626 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.626 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.626 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.626 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:56.626 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.569 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.831 00:18:57.831 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.831 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.831 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.091 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.091 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.091 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.091 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.091 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.091 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.091 { 00:18:58.091 "cntlid": 83, 00:18:58.091 "qid": 0, 00:18:58.091 "state": "enabled", 00:18:58.091 "thread": "nvmf_tgt_poll_group_000", 00:18:58.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.092 "listen_address": { 00:18:58.092 "trtype": "TCP", 00:18:58.092 "adrfam": "IPv4", 00:18:58.092 "traddr": "10.0.0.2", 00:18:58.092 
"trsvcid": "4420" 00:18:58.092 }, 00:18:58.092 "peer_address": { 00:18:58.092 "trtype": "TCP", 00:18:58.092 "adrfam": "IPv4", 00:18:58.092 "traddr": "10.0.0.1", 00:18:58.092 "trsvcid": "41918" 00:18:58.092 }, 00:18:58.092 "auth": { 00:18:58.092 "state": "completed", 00:18:58.092 "digest": "sha384", 00:18:58.092 "dhgroup": "ffdhe6144" 00:18:58.092 } 00:18:58.092 } 00:18:58.092 ]' 00:18:58.092 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.092 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.092 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.092 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.092 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.352 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.352 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.352 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.352 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:58.352 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.293 
10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.293 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.554 00:18:59.554 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.554 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.554 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.892 { 00:18:59.892 "cntlid": 85, 00:18:59.892 "qid": 0, 00:18:59.892 "state": "enabled", 00:18:59.892 "thread": "nvmf_tgt_poll_group_000", 00:18:59.892 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.892 "listen_address": { 00:18:59.892 "trtype": "TCP", 00:18:59.892 "adrfam": "IPv4", 00:18:59.892 "traddr": "10.0.0.2", 00:18:59.892 "trsvcid": "4420" 00:18:59.892 }, 00:18:59.892 "peer_address": { 00:18:59.892 "trtype": "TCP", 00:18:59.892 "adrfam": "IPv4", 00:18:59.892 "traddr": "10.0.0.1", 00:18:59.892 "trsvcid": "41938" 00:18:59.892 }, 00:18:59.892 "auth": { 00:18:59.892 "state": "completed", 00:18:59.892 "digest": "sha384", 00:18:59.892 "dhgroup": "ffdhe6144" 00:18:59.892 } 00:18:59.892 } 00:18:59.892 ]' 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.892 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.151 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:00.151 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:00.723 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.723 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.723 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.723 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.985 10:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.985 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.245 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.506 { 00:19:01.506 "cntlid": 87, 
00:19:01.506 "qid": 0, 00:19:01.506 "state": "enabled", 00:19:01.506 "thread": "nvmf_tgt_poll_group_000", 00:19:01.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.506 "listen_address": { 00:19:01.506 "trtype": "TCP", 00:19:01.506 "adrfam": "IPv4", 00:19:01.506 "traddr": "10.0.0.2", 00:19:01.506 "trsvcid": "4420" 00:19:01.506 }, 00:19:01.506 "peer_address": { 00:19:01.506 "trtype": "TCP", 00:19:01.506 "adrfam": "IPv4", 00:19:01.506 "traddr": "10.0.0.1", 00:19:01.506 "trsvcid": "41968" 00:19:01.506 }, 00:19:01.506 "auth": { 00:19:01.506 "state": "completed", 00:19:01.506 "digest": "sha384", 00:19:01.506 "dhgroup": "ffdhe6144" 00:19:01.506 } 00:19:01.506 } 00:19:01.506 ]' 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.506 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.767 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.767 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.767 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.767 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.767 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.767 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.027 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:02.027 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.598 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.859 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.120 00:19:03.120 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.120 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.120 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.380 { 00:19:03.380 "cntlid": 89, 00:19:03.380 "qid": 0, 00:19:03.380 "state": "enabled", 00:19:03.380 "thread": "nvmf_tgt_poll_group_000", 00:19:03.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.380 "listen_address": { 00:19:03.380 "trtype": "TCP", 00:19:03.380 "adrfam": "IPv4", 00:19:03.380 "traddr": "10.0.0.2", 00:19:03.380 "trsvcid": "4420" 00:19:03.380 }, 00:19:03.380 "peer_address": { 00:19:03.380 "trtype": "TCP", 00:19:03.380 "adrfam": "IPv4", 00:19:03.380 "traddr": "10.0.0.1", 00:19:03.380 "trsvcid": "46796" 00:19:03.380 }, 00:19:03.380 "auth": { 00:19:03.380 "state": "completed", 00:19:03.380 "digest": "sha384", 00:19:03.380 "dhgroup": "ffdhe8192" 00:19:03.380 } 00:19:03.380 } 00:19:03.380 ]' 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.380 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:03.641 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.585 10:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.585 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.157 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.157 { 00:19:05.157 "cntlid": 91, 00:19:05.157 "qid": 0, 00:19:05.157 "state": "enabled", 00:19:05.157 "thread": "nvmf_tgt_poll_group_000", 00:19:05.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.157 "listen_address": { 00:19:05.157 "trtype": "TCP", 00:19:05.157 "adrfam": "IPv4", 00:19:05.157 "traddr": "10.0.0.2", 00:19:05.157 "trsvcid": "4420" 00:19:05.157 }, 00:19:05.157 "peer_address": { 00:19:05.157 "trtype": "TCP", 00:19:05.157 "adrfam": "IPv4", 00:19:05.157 "traddr": "10.0.0.1", 00:19:05.157 "trsvcid": "46832" 00:19:05.157 }, 00:19:05.157 "auth": { 00:19:05.157 "state": "completed", 00:19:05.157 "digest": "sha384", 00:19:05.157 "dhgroup": "ffdhe8192" 00:19:05.157 } 00:19:05.157 } 00:19:05.157 ]' 00:19:05.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.417 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.676 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:05.676 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.247 10:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.247 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.508 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.078 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.078 10:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.078 { 00:19:07.078 "cntlid": 93, 00:19:07.078 "qid": 0, 00:19:07.078 "state": "enabled", 00:19:07.078 "thread": "nvmf_tgt_poll_group_000", 00:19:07.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.078 "listen_address": { 00:19:07.078 "trtype": "TCP", 00:19:07.078 "adrfam": "IPv4", 00:19:07.078 "traddr": "10.0.0.2", 00:19:07.078 "trsvcid": "4420" 00:19:07.078 }, 00:19:07.078 "peer_address": { 00:19:07.078 "trtype": "TCP", 00:19:07.078 "adrfam": "IPv4", 00:19:07.078 "traddr": "10.0.0.1", 00:19:07.078 "trsvcid": "46860" 00:19:07.078 }, 00:19:07.078 "auth": { 00:19:07.078 "state": "completed", 00:19:07.078 "digest": "sha384", 00:19:07.078 "dhgroup": "ffdhe8192" 00:19:07.078 } 00:19:07.078 } 00:19:07.078 ]' 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.078 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.338 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.338 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.338 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.338 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.338 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.598 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:07.598 10:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.169 10:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.169 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.430 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.690 00:19:08.690 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.690 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.690 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.951 { 00:19:08.951 "cntlid": 95, 00:19:08.951 "qid": 0, 00:19:08.951 "state": "enabled", 00:19:08.951 "thread": "nvmf_tgt_poll_group_000", 00:19:08.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.951 "listen_address": { 00:19:08.951 "trtype": "TCP", 00:19:08.951 "adrfam": "IPv4", 00:19:08.951 "traddr": "10.0.0.2", 00:19:08.951 "trsvcid": "4420" 00:19:08.951 }, 00:19:08.951 "peer_address": { 00:19:08.951 "trtype": "TCP", 00:19:08.951 "adrfam": "IPv4", 00:19:08.951 "traddr": "10.0.0.1", 00:19:08.951 "trsvcid": "46888" 00:19:08.951 }, 00:19:08.951 "auth": { 00:19:08.951 "state": "completed", 00:19:08.951 "digest": "sha384", 00:19:08.951 "dhgroup": "ffdhe8192" 00:19:08.951 } 00:19:08.951 } 00:19:08.951 ]' 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.951 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.211 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.211 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.211 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.211 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.211 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.471 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:09.471 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.040 10:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.040 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.300 00:19:10.300 
10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.300 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.559 { 00:19:10.559 "cntlid": 97, 00:19:10.559 "qid": 0, 00:19:10.559 "state": "enabled", 00:19:10.559 "thread": "nvmf_tgt_poll_group_000", 00:19:10.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:10.559 "listen_address": { 00:19:10.559 "trtype": "TCP", 00:19:10.559 "adrfam": "IPv4", 00:19:10.559 "traddr": "10.0.0.2", 00:19:10.559 "trsvcid": "4420" 00:19:10.559 }, 00:19:10.559 "peer_address": { 00:19:10.559 "trtype": "TCP", 00:19:10.559 "adrfam": "IPv4", 00:19:10.559 "traddr": "10.0.0.1", 00:19:10.559 "trsvcid": "46912" 00:19:10.559 }, 00:19:10.559 "auth": { 00:19:10.559 "state": "completed", 00:19:10.559 "digest": "sha512", 00:19:10.559 "dhgroup": "null" 00:19:10.559 } 00:19:10.559 } 00:19:10.559 ]' 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.559 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.819 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.819 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.819 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.819 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.819 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:10.819 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.758 10:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.758 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:11.758 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.759 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.018 00:19:12.018 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.018 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.018 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.278 { 00:19:12.278 "cntlid": 99, 00:19:12.278 "qid": 0, 00:19:12.278 "state": "enabled", 00:19:12.278 "thread": "nvmf_tgt_poll_group_000", 00:19:12.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:12.278 "listen_address": { 00:19:12.278 "trtype": "TCP", 00:19:12.278 "adrfam": "IPv4", 00:19:12.278 "traddr": "10.0.0.2", 00:19:12.278 "trsvcid": "4420" 00:19:12.278 }, 00:19:12.278 "peer_address": { 00:19:12.278 "trtype": "TCP", 00:19:12.278 "adrfam": "IPv4", 00:19:12.278 "traddr": "10.0.0.1", 00:19:12.278 "trsvcid": "46936" 00:19:12.278 }, 00:19:12.278 "auth": { 00:19:12.278 "state": "completed", 00:19:12.278 "digest": "sha512", 00:19:12.278 "dhgroup": "null" 00:19:12.278 } 00:19:12.278 } 00:19:12.278 ]' 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.278 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.537 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:12.537 10:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.107 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:13.367 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.627 00:19:13.627 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.627 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.627 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.887 { 00:19:13.887 "cntlid": 101, 00:19:13.887 "qid": 0, 00:19:13.887 "state": "enabled", 00:19:13.887 "thread": "nvmf_tgt_poll_group_000", 00:19:13.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.887 "listen_address": { 00:19:13.887 "trtype": "TCP", 00:19:13.887 "adrfam": "IPv4", 00:19:13.887 "traddr": "10.0.0.2", 00:19:13.887 "trsvcid": "4420" 00:19:13.887 }, 00:19:13.887 "peer_address": { 00:19:13.887 "trtype": "TCP", 00:19:13.887 "adrfam": "IPv4", 00:19:13.887 "traddr": "10.0.0.1", 00:19:13.887 "trsvcid": "56102" 00:19:13.887 }, 00:19:13.887 "auth": { 00:19:13.887 "state": "completed", 00:19:13.887 "digest": "sha512", 00:19:13.887 "dhgroup": "null" 00:19:13.887 } 00:19:13.887 } 00:19:13.887 ]' 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.887 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.147 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:14.147 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.716 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.975 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.234 00:19:15.234 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.234 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.234 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.533 { 00:19:15.533 "cntlid": 103, 00:19:15.533 "qid": 0, 00:19:15.533 "state": "enabled", 00:19:15.533 "thread": "nvmf_tgt_poll_group_000", 00:19:15.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:15.533 "listen_address": { 00:19:15.533 "trtype": "TCP", 00:19:15.533 "adrfam": "IPv4", 00:19:15.533 "traddr": "10.0.0.2", 00:19:15.533 "trsvcid": "4420" 00:19:15.533 }, 00:19:15.533 "peer_address": { 00:19:15.533 "trtype": "TCP", 00:19:15.533 "adrfam": "IPv4", 00:19:15.533 "traddr": "10.0.0.1", 00:19:15.533 "trsvcid": "56120" 00:19:15.533 }, 00:19:15.533 "auth": { 00:19:15.533 "state": "completed", 00:19:15.533 "digest": "sha512", 00:19:15.533 "dhgroup": "null" 00:19:15.533 } 00:19:15.533 } 00:19:15.533 ]' 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.533 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.792 10:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:15.793 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.361 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
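[editor's sketch] The trace above repeats one fixed pattern per (digest, dhgroup, key) combination. A minimal standalone sketch of the setup half of one iteration, assuming the target and host SPDK apps are already running (host RPC socket at /var/tmp/host.sock as in the trace; the target-side rpc_cmd is shown here against the default socket, which is an assumption, since the trace does not expand it) and that keys key0/ckey0 were registered earlier in the test:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Pin the host-side bdev_nvme driver to a single digest/dhgroup pair so
    # the parameters negotiated during CONNECT are predictable.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Authorize the host NQN on the subsystem (target-side RPC): key0
    # authenticates the host, ckey0 makes the authentication bidirectional.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0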
00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.620 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.879 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.879 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.138 { 00:19:17.138 "cntlid": 105, 00:19:17.138 "qid": 0, 00:19:17.138 "state": "enabled", 00:19:17.138 "thread": "nvmf_tgt_poll_group_000", 00:19:17.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:17.138 "listen_address": { 00:19:17.138 "trtype": "TCP", 00:19:17.138 "adrfam": "IPv4", 00:19:17.138 "traddr": "10.0.0.2", 00:19:17.138 "trsvcid": "4420" 00:19:17.138 }, 00:19:17.138 "peer_address": { 00:19:17.138 "trtype": "TCP", 00:19:17.138 "adrfam": "IPv4", 00:19:17.138 "traddr": "10.0.0.1", 00:19:17.138 "trsvcid": "56150" 00:19:17.138 }, 00:19:17.138 "auth": { 00:19:17.138 "state": "completed", 00:19:17.138 "digest": "sha512", 00:19:17.138 "dhgroup": "ffdhe2048" 00:19:17.138 } 00:19:17.138 } 00:19:17.138 ]' 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.138 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.138 10:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.398 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:17.398 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.968 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.228 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.488 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.488 { 00:19:18.488 "cntlid": 107, 00:19:18.488 "qid": 0, 00:19:18.488 "state": "enabled", 00:19:18.488 "thread": "nvmf_tgt_poll_group_000", 00:19:18.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.488 "listen_address": { 00:19:18.488 "trtype": "TCP", 00:19:18.488 "adrfam": "IPv4", 00:19:18.488 "traddr": "10.0.0.2", 00:19:18.488 "trsvcid": "4420" 00:19:18.488 }, 00:19:18.488 "peer_address": { 00:19:18.488 "trtype": "TCP", 00:19:18.488 "adrfam": "IPv4", 00:19:18.488 "traddr": "10.0.0.1", 00:19:18.488 "trsvcid": "56180" 00:19:18.488 }, 00:19:18.488 "auth": { 00:19:18.488 "state": "completed", 00:19:18.488 "digest": "sha512", 00:19:18.488 "dhgroup": "ffdhe2048" 00:19:18.488 } 00:19:18.488 } 00:19:18.488 ]' 00:19:18.488 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.748 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.007 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:19.007 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:19.577 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.577 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.577 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.577 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.577 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.577 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.578 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.578 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
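[editor's sketch] The verification half of each iteration, under the same assumptions ($RPC and $HOSTNQN as defined in the previous sketch); the jq filters mirror the ones visible in the trace, which require the qpair to report the expected digest, dhgroup, and a completed auth state before the controller is detached:

    # Attach from the host-side driver; DH-HMAC-CHAP runs as part of CONNECT.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Ask the target for the subsystem's qpairs and check that authentication
    # completed with the expected parameters, then detach again.
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0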
00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.838 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.098 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.098 { 00:19:20.098 "cntlid": 109, 00:19:20.098 "qid": 0, 00:19:20.098 "state": "enabled", 00:19:20.098 "thread": "nvmf_tgt_poll_group_000", 00:19:20.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:20.098 "listen_address": { 00:19:20.098 "trtype": "TCP", 00:19:20.098 "adrfam": "IPv4", 00:19:20.098 "traddr": "10.0.0.2", 00:19:20.098 "trsvcid": "4420" 00:19:20.098 }, 00:19:20.098 "peer_address": { 00:19:20.098 "trtype": "TCP", 00:19:20.098 "adrfam": "IPv4", 00:19:20.098 "traddr": "10.0.0.1", 00:19:20.098 "trsvcid": "56216" 00:19:20.098 }, 00:19:20.098 "auth": { 00:19:20.098 "state": "completed", 00:19:20.098 "digest": "sha512", 00:19:20.098 "dhgroup": "ffdhe2048" 00:19:20.098 } 00:19:20.098 } 00:19:20.098 ]' 00:19:20.098 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.358 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.358 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.358 10:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.358 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.358 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.358 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.358 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.618 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:20.618 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.189 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.449 10:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.449 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.708 00:19:21.708 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.708 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.708 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.708 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.708 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.708 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.708 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.708 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.708 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.708 { 00:19:21.708 "cntlid": 111, 00:19:21.708 "qid": 0, 00:19:21.708 "state": "enabled", 00:19:21.708 "thread": "nvmf_tgt_poll_group_000", 00:19:21.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:21.708 "listen_address": { 00:19:21.708 "trtype": "TCP", 00:19:21.708 "adrfam": "IPv4", 00:19:21.708 "traddr": "10.0.0.2", 00:19:21.708 "trsvcid": "4420" 00:19:21.708 }, 00:19:21.708 "peer_address": { 00:19:21.708 "trtype": "TCP", 00:19:21.708 "adrfam": "IPv4", 00:19:21.708 "traddr": "10.0.0.1", 00:19:21.708 "trsvcid": "56244" 00:19:21.708 }, 00:19:21.708 "auth": { 00:19:21.708 "state": "completed", 00:19:21.708 "digest": "sha512", 00:19:21.709 "dhgroup": "ffdhe2048" 00:19:21.709 } 00:19:21.709 } 00:19:21.709 ]' 00:19:21.709 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.968 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.968 
10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.968 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.968 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.968 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.968 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.968 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.228 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:22.228 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.914 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.181 00:19:23.181 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.181 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.181 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.438 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.438 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.438 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.438 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.438 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.438 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.438 { 00:19:23.438 "cntlid": 113, 00:19:23.438 "qid": 0, 00:19:23.438 "state": "enabled", 00:19:23.438 "thread": "nvmf_tgt_poll_group_000", 00:19:23.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.438 "listen_address": { 00:19:23.438 "trtype": "TCP", 00:19:23.438 "adrfam": "IPv4", 00:19:23.438 "traddr": "10.0.0.2", 00:19:23.438 "trsvcid": "4420" 00:19:23.438 }, 00:19:23.438 "peer_address": { 00:19:23.438 "trtype": "TCP", 00:19:23.438 "adrfam": "IPv4", 00:19:23.438 "traddr": "10.0.0.1", 00:19:23.439 "trsvcid": "33790" 00:19:23.439 }, 00:19:23.439 "auth": { 00:19:23.439 "state": "completed", 00:19:23.439 "digest": "sha512", 00:19:23.439 "dhgroup": "ffdhe3072" 00:19:23.439 } 00:19:23.439 } 00:19:23.439 ]' 00:19:23.439 10:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.439 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.439 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.439 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.439 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.698 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.698 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.698 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.698 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:23.698 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.638 10:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.899 00:19:24.899 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.899 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.899 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.159 { 00:19:25.159 "cntlid": 115, 00:19:25.159 "qid": 0, 00:19:25.159 "state": "enabled", 00:19:25.159 "thread": "nvmf_tgt_poll_group_000", 00:19:25.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:25.159 "listen_address": { 00:19:25.159 "trtype": "TCP", 00:19:25.159 "adrfam": "IPv4", 00:19:25.159 "traddr": "10.0.0.2", 00:19:25.159 "trsvcid": "4420" 00:19:25.159 }, 00:19:25.159 "peer_address": { 00:19:25.159 "trtype": "TCP", 00:19:25.159 "adrfam": "IPv4", 
00:19:25.159 "traddr": "10.0.0.1", 00:19:25.159 "trsvcid": "33808" 00:19:25.159 }, 00:19:25.159 "auth": { 00:19:25.159 "state": "completed", 00:19:25.159 "digest": "sha512", 00:19:25.159 "dhgroup": "ffdhe3072" 00:19:25.159 } 00:19:25.159 } 00:19:25.159 ]' 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.159 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.418 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:25.418 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.986 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.245 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.504 00:19:26.504 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.504 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.504 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.764 { 00:19:26.764 "cntlid": 117, 00:19:26.764 "qid": 0, 00:19:26.764 "state": "enabled", 00:19:26.764 "thread": "nvmf_tgt_poll_group_000", 00:19:26.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.764 "listen_address": { 00:19:26.764 "trtype": "TCP", 
00:19:26.764 "adrfam": "IPv4", 00:19:26.764 "traddr": "10.0.0.2", 00:19:26.764 "trsvcid": "4420" 00:19:26.764 }, 00:19:26.764 "peer_address": { 00:19:26.764 "trtype": "TCP", 00:19:26.764 "adrfam": "IPv4", 00:19:26.764 "traddr": "10.0.0.1", 00:19:26.764 "trsvcid": "33828" 00:19:26.764 }, 00:19:26.764 "auth": { 00:19:26.764 "state": "completed", 00:19:26.764 "digest": "sha512", 00:19:26.764 "dhgroup": "ffdhe3072" 00:19:26.764 } 00:19:26.764 } 00:19:26.764 ]' 00:19:26.764 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.764 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.023 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:27.023 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.592 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.851 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.851 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.111 00:19:28.111 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.111 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.111 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.370 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.370 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.370 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.370 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.370 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.370 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.370 { 00:19:28.370 "cntlid": 119, 00:19:28.370 "qid": 0, 00:19:28.370 "state": "enabled", 00:19:28.370 "thread": "nvmf_tgt_poll_group_000", 00:19:28.370 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.370 "listen_address": { 00:19:28.370 "trtype": "TCP", 00:19:28.370 "adrfam": "IPv4", 00:19:28.370 "traddr": "10.0.0.2", 00:19:28.371 "trsvcid": "4420" 00:19:28.371 }, 00:19:28.371 "peer_address": { 00:19:28.371 "trtype": "TCP", 00:19:28.371 "adrfam": "IPv4", 00:19:28.371 "traddr": "10.0.0.1", 00:19:28.371 "trsvcid": "33854" 00:19:28.371 }, 00:19:28.371 "auth": { 00:19:28.371 "state": "completed", 00:19:28.371 "digest": "sha512", 00:19:28.371 "dhgroup": "ffdhe3072" 00:19:28.371 } 00:19:28.371 } 00:19:28.371 ]' 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.371 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.630 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:28.630 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.199 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.459 10:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.459 10:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.719 00:19:29.719 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.719 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.719 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.978 10:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.978 { 00:19:29.978 "cntlid": 121, 00:19:29.978 "qid": 0, 00:19:29.978 "state": "enabled", 00:19:29.978 "thread": "nvmf_tgt_poll_group_000", 00:19:29.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:29.978 "listen_address": { 00:19:29.978 "trtype": "TCP", 00:19:29.978 "adrfam": "IPv4", 00:19:29.978 "traddr": "10.0.0.2", 00:19:29.978 "trsvcid": "4420" 00:19:29.978 }, 00:19:29.978 "peer_address": { 00:19:29.978 "trtype": "TCP", 00:19:29.978 "adrfam": "IPv4", 00:19:29.978 "traddr": "10.0.0.1", 00:19:29.978 "trsvcid": "33888" 00:19:29.978 }, 00:19:29.978 "auth": { 00:19:29.978 "state": "completed", 00:19:29.978 "digest": "sha512", 00:19:29.978 "dhgroup": "ffdhe4096" 00:19:29.978 } 00:19:29.978 } 00:19:29.978 ]' 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.978 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.238 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.238 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.238 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.238 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:30.238 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
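(Condensed view of the cycle this log repeats.) Each block above is one pass of target/auth.sh's connect_authenticate over a (digest, dhgroup, keyid) combination: reconfigure the host-side bdev_nvme driver over /var/tmp/host.sock, register the host NQN on the target subsystem with the matching DH-HMAC-CHAP key pair, attach a controller and assert the resulting qpair authenticated with the expected parameters, then repeat the handshake with the kernel initiator before cleaning up. The sketch below is reconstructed from the RPC calls visible in this log; the hostrpc/rpc_cmd helper definitions, the loop variables ($keyid, $digest, $dhgroup, $hostnqn, $hostid) and the keys/ckeys arrays are assumptions inferred from the trace, not the script's verbatim source.

    # Assumed helpers: hostrpc drives the host-side SPDK app on /var/tmp/host.sock,
    # rpc_cmd the nvmf target on its default RPC socket.
    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

    # 1. Limit the host driver to the digest/dhgroup combination under test.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Allow the host NQN on the subsystem with the matching key pair; the
    #    controller key is optional (key3 in this log has no ckey3, so the
    #    ${ckeys[$keyid]:+...} expansion collapses to nothing).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # 3. Attach a controller, then assert the qpair authenticated as expected.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 --dhchap-secret "${keys[$keyid]}" \
        ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

In this run each keyN carries a secret prefixed DHHC-1:0N:, and key3 has no controller key, which is why the key3 passes in this log omit the --dhchap-ctrlr-key/--dhchap-ctrl-secret arguments.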
00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.176 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.435 00:19:31.435 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.435 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.435 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.695 { 00:19:31.695 "cntlid": 123, 00:19:31.695 "qid": 0, 00:19:31.695 "state": "enabled", 00:19:31.695 "thread": "nvmf_tgt_poll_group_000", 00:19:31.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.695 "listen_address": { 00:19:31.695 "trtype": "TCP", 00:19:31.695 "adrfam": "IPv4", 00:19:31.695 "traddr": "10.0.0.2", 00:19:31.695 "trsvcid": "4420" 00:19:31.695 }, 00:19:31.695 "peer_address": { 00:19:31.695 "trtype": "TCP", 00:19:31.695 "adrfam": "IPv4", 00:19:31.695 "traddr": "10.0.0.1", 00:19:31.695 "trsvcid": "33902" 00:19:31.695 }, 00:19:31.695 "auth": { 00:19:31.695 "state": "completed", 00:19:31.695 "digest": "sha512", 00:19:31.695 "dhgroup": "ffdhe4096" 00:19:31.695 } 00:19:31.695 } 00:19:31.695 ]' 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.695 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.955 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:31.955 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.525 10:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.525 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.786 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.045 00:19:33.045 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.045 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.045 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.306 10:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.306 { 00:19:33.306 "cntlid": 125, 00:19:33.306 "qid": 0, 00:19:33.306 "state": "enabled", 00:19:33.306 "thread": "nvmf_tgt_poll_group_000", 00:19:33.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.306 "listen_address": { 00:19:33.306 "trtype": "TCP", 00:19:33.306 "adrfam": "IPv4", 00:19:33.306 "traddr": "10.0.0.2", 00:19:33.306 "trsvcid": "4420" 00:19:33.306 }, 00:19:33.306 "peer_address": { 00:19:33.306 "trtype": "TCP", 00:19:33.306 "adrfam": "IPv4", 00:19:33.306 "traddr": "10.0.0.1", 00:19:33.306 "trsvcid": "54464" 00:19:33.306 }, 00:19:33.306 "auth": { 00:19:33.306 "state": "completed", 00:19:33.306 "digest": "sha512", 00:19:33.306 "dhgroup": "ffdhe4096" 00:19:33.306 } 00:19:33.306 } 00:19:33.306 ]' 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.306 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.566 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:33.566 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:34.136 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.396 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.657 00:19:34.657 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.657 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.657 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.917 10:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.917 { 00:19:34.917 "cntlid": 127, 00:19:34.917 "qid": 0, 00:19:34.917 "state": "enabled", 00:19:34.917 "thread": "nvmf_tgt_poll_group_000", 00:19:34.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.917 "listen_address": { 00:19:34.917 "trtype": "TCP", 00:19:34.917 "adrfam": "IPv4", 00:19:34.917 "traddr": "10.0.0.2", 00:19:34.917 "trsvcid": "4420" 00:19:34.917 }, 00:19:34.917 "peer_address": { 00:19:34.917 "trtype": "TCP", 00:19:34.917 "adrfam": "IPv4", 00:19:34.917 "traddr": "10.0.0.1", 00:19:34.917 "trsvcid": "54484" 00:19:34.917 }, 00:19:34.917 "auth": { 00:19:34.917 "state": "completed", 00:19:34.917 "digest": "sha512", 00:19:34.917 "dhgroup": "ffdhe4096" 00:19:34.917 } 00:19:34.917 } 00:19:34.917 ]' 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.917 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.178 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:35.178 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.746 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.005 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.265 00:19:36.265 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.265 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.266 
10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.525 { 00:19:36.525 "cntlid": 129, 00:19:36.525 "qid": 0, 00:19:36.525 "state": "enabled", 00:19:36.525 "thread": "nvmf_tgt_poll_group_000", 00:19:36.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.525 "listen_address": { 00:19:36.525 "trtype": "TCP", 00:19:36.525 "adrfam": "IPv4", 00:19:36.525 "traddr": "10.0.0.2", 00:19:36.525 "trsvcid": "4420" 00:19:36.525 }, 00:19:36.525 "peer_address": { 00:19:36.525 "trtype": "TCP", 00:19:36.525 "adrfam": "IPv4", 00:19:36.525 "traddr": "10.0.0.1", 00:19:36.525 "trsvcid": "54512" 00:19:36.525 }, 00:19:36.525 "auth": { 00:19:36.525 "state": "completed", 00:19:36.525 "digest": "sha512", 00:19:36.525 "dhgroup": "ffdhe6144" 00:19:36.525 } 00:19:36.525 } 00:19:36.525 ]' 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.525 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.784 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.784 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.784 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.784 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:36.784 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret 
DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.722 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.981 00:19:37.981 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.981 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.981 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.240 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.240 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.240 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.240 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.240 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.240 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.240 { 00:19:38.240 "cntlid": 131, 00:19:38.240 "qid": 0, 00:19:38.240 "state": "enabled", 00:19:38.240 "thread": "nvmf_tgt_poll_group_000", 00:19:38.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.240 "listen_address": { 00:19:38.240 "trtype": "TCP", 00:19:38.240 "adrfam": "IPv4", 00:19:38.240 "traddr": "10.0.0.2", 00:19:38.240 "trsvcid": "4420" 00:19:38.240 }, 00:19:38.240 "peer_address": { 00:19:38.240 "trtype": "TCP", 00:19:38.240 "adrfam": "IPv4", 00:19:38.240 "traddr": "10.0.0.1", 00:19:38.240 "trsvcid": "54528" 00:19:38.240 }, 00:19:38.240 "auth": { 00:19:38.240 "state": "completed", 00:19:38.240 "digest": "sha512", 00:19:38.240 "dhgroup": "ffdhe6144" 00:19:38.241 } 00:19:38.241 } 00:19:38.241 ]' 00:19:38.241 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.241 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.241 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.241 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.241 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.500 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.500 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.500 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.500 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:38.500 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.438 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.699 00:19:39.699 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.699 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.699 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.958 { 00:19:39.958 "cntlid": 133, 00:19:39.958 "qid": 0, 00:19:39.958 "state": "enabled", 00:19:39.958 "thread": "nvmf_tgt_poll_group_000", 00:19:39.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.958 "listen_address": { 00:19:39.958 "trtype": "TCP", 00:19:39.958 "adrfam": "IPv4", 00:19:39.958 "traddr": "10.0.0.2", 00:19:39.958 "trsvcid": "4420" 00:19:39.958 }, 00:19:39.958 "peer_address": { 00:19:39.958 "trtype": "TCP", 00:19:39.958 "adrfam": "IPv4", 00:19:39.958 "traddr": "10.0.0.1", 00:19:39.958 "trsvcid": "54558" 00:19:39.958 }, 00:19:39.958 "auth": { 00:19:39.958 "state": "completed", 00:19:39.958 "digest": "sha512", 00:19:39.958 "dhgroup": "ffdhe6144" 00:19:39.958 } 00:19:39.958 } 00:19:39.958 ]' 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.958 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.218 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret 
DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:40.218 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:41.155 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.155 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.155 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.155 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.155 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:41.156 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.415 00:19:41.415 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.415 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.415 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.675 { 00:19:41.675 "cntlid": 135, 00:19:41.675 "qid": 0, 00:19:41.675 "state": "enabled", 00:19:41.675 "thread": "nvmf_tgt_poll_group_000", 00:19:41.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.675 "listen_address": { 00:19:41.675 "trtype": "TCP", 00:19:41.675 "adrfam": "IPv4", 00:19:41.675 "traddr": "10.0.0.2", 00:19:41.675 "trsvcid": "4420" 00:19:41.675 }, 00:19:41.675 "peer_address": { 00:19:41.675 "trtype": "TCP", 00:19:41.675 "adrfam": "IPv4", 00:19:41.675 "traddr": "10.0.0.1", 00:19:41.675 "trsvcid": "54578" 00:19:41.675 }, 00:19:41.675 "auth": { 00:19:41.675 "state": "completed", 00:19:41.675 "digest": "sha512", 00:19:41.675 "dhgroup": "ffdhe6144" 00:19:41.675 } 00:19:41.675 } 00:19:41.675 ]' 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.675 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.675 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.675 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.675 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.934 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:41.934 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:42.871 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.872 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.872 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.440 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.440 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.440 { 00:19:43.440 "cntlid": 137, 00:19:43.440 "qid": 0, 00:19:43.440 "state": "enabled", 00:19:43.440 "thread": "nvmf_tgt_poll_group_000", 00:19:43.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.440 "listen_address": { 00:19:43.440 "trtype": "TCP", 00:19:43.440 "adrfam": "IPv4", 00:19:43.440 "traddr": "10.0.0.2", 00:19:43.441 "trsvcid": "4420" 00:19:43.441 }, 00:19:43.441 "peer_address": { 00:19:43.441 "trtype": "TCP", 00:19:43.441 "adrfam": "IPv4", 00:19:43.441 "traddr": "10.0.0.1", 00:19:43.441 "trsvcid": "53784" 00:19:43.441 }, 00:19:43.441 "auth": { 00:19:43.441 "state": "completed", 00:19:43.441 "digest": "sha512", 00:19:43.441 "dhgroup": "ffdhe8192" 00:19:43.441 } 00:19:43.441 } 00:19:43.441 ]' 00:19:43.441 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.700 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.960 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:43.960 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.529 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.789 10:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.789 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.357 00:19:45.357 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.357 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.357 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.357 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.357 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.358 { 00:19:45.358 "cntlid": 139, 00:19:45.358 "qid": 0, 00:19:45.358 "state": "enabled", 00:19:45.358 "thread": "nvmf_tgt_poll_group_000", 00:19:45.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.358 "listen_address": { 00:19:45.358 "trtype": "TCP", 00:19:45.358 "adrfam": "IPv4", 00:19:45.358 "traddr": "10.0.0.2", 00:19:45.358 "trsvcid": "4420" 00:19:45.358 }, 00:19:45.358 "peer_address": { 00:19:45.358 "trtype": "TCP", 00:19:45.358 "adrfam": "IPv4", 00:19:45.358 "traddr": "10.0.0.1", 00:19:45.358 "trsvcid": "53814" 00:19:45.358 }, 00:19:45.358 "auth": { 00:19:45.358 "state": "completed", 00:19:45.358 "digest": "sha512", 00:19:45.358 "dhgroup": "ffdhe8192" 00:19:45.358 } 00:19:45.358 } 00:19:45.358 ]' 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.358 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.617 10:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.617 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.617 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.617 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:45.617 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: --dhchap-ctrl-secret DHHC-1:02:MGM2NzMzZDdkNTRmMWQxNTc4M2VmMjg4YzQwNDE2MTBhYmY3ZTE5OGNhODYwZDAwEW68pw==: 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.556 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.125 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.125 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.125 { 00:19:47.125 "cntlid": 141, 00:19:47.125 "qid": 0, 00:19:47.125 "state": "enabled", 00:19:47.125 "thread": "nvmf_tgt_poll_group_000", 00:19:47.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.125 "listen_address": { 00:19:47.125 "trtype": "TCP", 00:19:47.125 "adrfam": "IPv4", 00:19:47.125 "traddr": "10.0.0.2", 00:19:47.125 "trsvcid": "4420" 00:19:47.125 }, 00:19:47.125 "peer_address": { 00:19:47.125 "trtype": "TCP", 00:19:47.125 "adrfam": "IPv4", 00:19:47.125 "traddr": "10.0.0.1", 00:19:47.125 "trsvcid": "53832" 00:19:47.125 }, 00:19:47.126 "auth": { 00:19:47.126 "state": "completed", 00:19:47.126 "digest": "sha512", 00:19:47.126 "dhgroup": "ffdhe8192" 00:19:47.126 } 00:19:47.126 } 00:19:47.126 ]' 00:19:47.126 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.385 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.385 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.385 10:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.385 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.385 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.385 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.385 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.682 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:47.682 10:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:01:MzVmODgwZDljYmZiZmVlOTU0ODAwNDRmM2NhOTA1MjXTiUU4: 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.272 10:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.272 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.532 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.532 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.532 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.791 00:19:48.791 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.791 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.791 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.051 { 00:19:49.051 "cntlid": 143, 00:19:49.051 "qid": 0, 00:19:49.051 "state": "enabled", 00:19:49.051 "thread": "nvmf_tgt_poll_group_000", 00:19:49.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.051 "listen_address": { 00:19:49.051 "trtype": "TCP", 00:19:49.051 "adrfam": "IPv4", 00:19:49.051 "traddr": "10.0.0.2", 00:19:49.051 "trsvcid": "4420" 00:19:49.051 }, 00:19:49.051 "peer_address": { 00:19:49.051 "trtype": "TCP", 00:19:49.051 "adrfam": "IPv4", 00:19:49.051 "traddr": "10.0.0.1", 00:19:49.051 "trsvcid": "53848" 00:19:49.051 }, 00:19:49.051 "auth": { 00:19:49.051 "state": "completed", 00:19:49.051 "digest": "sha512", 00:19:49.051 "dhgroup": "ffdhe8192" 00:19:49.051 } 00:19:49.051 } 00:19:49.051 ]' 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.051 
10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.051 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.310 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.310 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.310 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.310 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.310 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:49.311 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.248 10:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.248 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.818 00:19:50.818 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.818 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.818 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.818 { 00:19:50.818 "cntlid": 145, 00:19:50.818 "qid": 0, 00:19:50.818 "state": "enabled", 00:19:50.818 "thread": "nvmf_tgt_poll_group_000", 00:19:50.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.818 "listen_address": { 00:19:50.818 "trtype": "TCP", 00:19:50.818 "adrfam": "IPv4", 00:19:50.818 "traddr": "10.0.0.2", 00:19:50.818 "trsvcid": "4420" 00:19:50.818 }, 00:19:50.818 "peer_address": { 00:19:50.818 
"trtype": "TCP", 00:19:50.818 "adrfam": "IPv4", 00:19:50.818 "traddr": "10.0.0.1", 00:19:50.818 "trsvcid": "53866" 00:19:50.818 }, 00:19:50.818 "auth": { 00:19:50.818 "state": "completed", 00:19:50.818 "digest": "sha512", 00:19:50.818 "dhgroup": "ffdhe8192" 00:19:50.818 } 00:19:50.818 } 00:19:50.818 ]' 00:19:50.818 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.077 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.335 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:51.335 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NjRmYWUxNWNmN2NiNmQ5MDZmNzhhYWRiNDNjZDNjZmVjMjAxZWIzMmI5NWYwM2IzDSgyNw==: --dhchap-ctrl-secret DHHC-1:03:M2RlMmJjNGM0OWFiZDkyMTBjNDY3ZTkzZTgwZmViZGEwYjZlMDI1OTdmMTEwZDU4YTI0ZmQzNjBkZjMwZjAxY13F8Ig=: 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:51.903 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:52.472 request: 00:19:52.472 { 00:19:52.472 "name": "nvme0", 00:19:52.472 "trtype": "tcp", 00:19:52.472 "traddr": "10.0.0.2", 00:19:52.472 "adrfam": "ipv4", 00:19:52.472 "trsvcid": "4420", 00:19:52.472 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.472 "prchk_reftag": false, 00:19:52.472 "prchk_guard": false, 00:19:52.472 "hdgst": false, 00:19:52.472 "ddgst": false, 00:19:52.472 "dhchap_key": "key2", 00:19:52.472 "allow_unrecognized_csi": false, 00:19:52.472 "method": "bdev_nvme_attach_controller", 00:19:52.472 "req_id": 1 00:19:52.472 } 00:19:52.472 Got JSON-RPC error response 00:19:52.472 response: 00:19:52.472 { 00:19:52.472 "code": -5, 00:19:52.472 "message": "Input/output error" 00:19:52.472 } 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.472 10:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:52.472 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:52.731 request: 00:19:52.731 { 00:19:52.731 "name": "nvme0", 00:19:52.731 "trtype": "tcp", 00:19:52.731 "traddr": "10.0.0.2", 00:19:52.731 "adrfam": "ipv4", 00:19:52.731 "trsvcid": "4420", 00:19:52.731 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.731 "prchk_reftag": false, 00:19:52.731 "prchk_guard": false, 00:19:52.731 "hdgst": false, 00:19:52.731 "ddgst": false, 00:19:52.731 "dhchap_key": "key1", 00:19:52.731 "dhchap_ctrlr_key": "ckey2", 00:19:52.731 "allow_unrecognized_csi": false, 00:19:52.731 "method": "bdev_nvme_attach_controller", 00:19:52.731 "req_id": 1 00:19:52.731 } 00:19:52.731 Got JSON-RPC error response 00:19:52.731 response: 00:19:52.731 { 00:19:52.731 "code": -5, 00:19:52.731 "message": "Input/output error" 00:19:52.731 } 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:52.731 10:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.731 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.732 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.300 request: 00:19:53.300 { 00:19:53.300 "name": "nvme0", 00:19:53.300 "trtype": "tcp", 00:19:53.300 "traddr": "10.0.0.2", 00:19:53.300 "adrfam": "ipv4", 00:19:53.300 "trsvcid": "4420", 00:19:53.300 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.300 "prchk_reftag": false, 00:19:53.300 "prchk_guard": false, 00:19:53.300 "hdgst": false, 00:19:53.300 "ddgst": false, 00:19:53.300 "dhchap_key": "key1", 00:19:53.300 "dhchap_ctrlr_key": "ckey1", 00:19:53.300 "allow_unrecognized_csi": false, 00:19:53.300 "method": "bdev_nvme_attach_controller", 00:19:53.300 "req_id": 1 00:19:53.300 } 00:19:53.300 Got JSON-RPC error response 00:19:53.300 response: 00:19:53.300 { 00:19:53.300 "code": -5, 00:19:53.300 "message": "Input/output error" 00:19:53.300 } 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2025056 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2025056 ']' 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2025056 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2025056 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2025056' 00:19:53.300 killing process with pid 2025056 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2025056 00:19:53.300 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2025056 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2051393 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2051393 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2051393 ']' 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.560 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2051393 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2051393 ']' 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.498 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.498 null0 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DXX 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.U1Q ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.U1Q 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QTK 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.OYR ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OYR 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:54.758 10:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vxS 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.LS1 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LS1 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.O5p 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
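Unrolled, the @174-@176 loop and the @70/@60 records above reduce to: load each generated key file into the keyring (keyring_file_add_key), authorize the host NQN on the subsystem with one of them (nvmf_subsystem_add_host --dhchap-key), and attach from the host with the matching key. A condensed sketch using the key3 material from this run; rpc and hostrpc are shorthand wrappers introduced here (the suite's rpc_cmd/hostrpc helpers play the same role):

    rpc()     { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    hostrpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    # Register the generated sha512 key file with the target's keyring.
    rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.O5p
    # Require DH-HMAC-CHAP with key3 for this host NQN on cnode0.
    rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
    # Host side: attach a controller, which drives the sha512/ffdhe8192 exchange.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

The qpair dump that follows ("auth": state completed, digest sha512, dhgroup ffdhe8192) is the assertion that the negotiation actually completed with the expected parameters.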
00:19:54.758 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.695 nvme0n1 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.695 { 00:19:55.695 "cntlid": 1, 00:19:55.695 "qid": 0, 00:19:55.695 "state": "enabled", 00:19:55.695 "thread": "nvmf_tgt_poll_group_000", 00:19:55.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.695 "listen_address": { 00:19:55.695 "trtype": "TCP", 00:19:55.695 "adrfam": "IPv4", 00:19:55.695 "traddr": "10.0.0.2", 00:19:55.695 "trsvcid": "4420" 00:19:55.695 }, 00:19:55.695 "peer_address": { 00:19:55.695 "trtype": "TCP", 00:19:55.695 "adrfam": "IPv4", 00:19:55.695 "traddr": "10.0.0.1", 00:19:55.695 "trsvcid": "53054" 00:19:55.695 }, 00:19:55.695 "auth": { 00:19:55.695 "state": "completed", 00:19:55.695 "digest": "sha512", 00:19:55.695 "dhgroup": "ffdhe8192" 00:19:55.695 } 00:19:55.695 } 00:19:55.695 ]' 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.695 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.695 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.695 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.695 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.695 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.695 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.954 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:55.954 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:56.523 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:56.783 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.783 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.042 request: 00:19:57.042 { 00:19:57.042 "name": "nvme0", 00:19:57.042 "trtype": "tcp", 00:19:57.042 "traddr": "10.0.0.2", 00:19:57.042 "adrfam": "ipv4", 00:19:57.042 "trsvcid": "4420", 00:19:57.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:57.042 "prchk_reftag": false, 00:19:57.042 "prchk_guard": false, 00:19:57.042 "hdgst": false, 00:19:57.042 "ddgst": false, 00:19:57.042 "dhchap_key": "key3", 00:19:57.042 "allow_unrecognized_csi": false, 00:19:57.042 "method": "bdev_nvme_attach_controller", 00:19:57.042 "req_id": 1 00:19:57.042 } 00:19:57.042 Got JSON-RPC error response 00:19:57.042 response: 00:19:57.042 { 00:19:57.042 "code": -5, 00:19:57.042 "message": "Input/output error" 00:19:57.042 } 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:57.042 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.302 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.561 request: 00:19:57.561 { 00:19:57.562 "name": "nvme0", 00:19:57.562 "trtype": "tcp", 00:19:57.562 "traddr": "10.0.0.2", 00:19:57.562 "adrfam": "ipv4", 00:19:57.562 "trsvcid": "4420", 00:19:57.562 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:57.562 "prchk_reftag": false, 00:19:57.562 "prchk_guard": false, 00:19:57.562 "hdgst": false, 00:19:57.562 "ddgst": false, 00:19:57.562 "dhchap_key": "key3", 00:19:57.562 "allow_unrecognized_csi": false, 00:19:57.562 "method": "bdev_nvme_attach_controller", 00:19:57.562 "req_id": 1 00:19:57.562 } 00:19:57.562 Got JSON-RPC error response 00:19:57.562 response: 00:19:57.562 { 00:19:57.562 "code": -5, 00:19:57.562 "message": "Input/output error" 00:19:57.562 } 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:57.562 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.132 request: 00:19:58.132 { 00:19:58.132 "name": "nvme0", 00:19:58.132 "trtype": "tcp", 00:19:58.132 "traddr": "10.0.0.2", 00:19:58.132 "adrfam": "ipv4", 00:19:58.132 "trsvcid": "4420", 00:19:58.132 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.132 "prchk_reftag": false, 00:19:58.132 "prchk_guard": false, 00:19:58.132 "hdgst": false, 00:19:58.132 "ddgst": false, 00:19:58.132 "dhchap_key": "key0", 00:19:58.132 "dhchap_ctrlr_key": "key1", 00:19:58.133 "allow_unrecognized_csi": false, 00:19:58.133 "method": "bdev_nvme_attach_controller", 00:19:58.133 "req_id": 1 00:19:58.133 } 00:19:58.133 Got JSON-RPC error response 00:19:58.133 response: 00:19:58.133 { 00:19:58.133 "code": -5, 00:19:58.133 "message": "Input/output error" 00:19:58.133 } 00:19:58.133 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:58.133 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.133 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.133 10:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.133 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:58.133 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:58.133 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:58.133 nvme0n1 00:19:58.392 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:58.392 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:58.392 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.392 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.392 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.392 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:58.652 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:59.593 nvme0n1 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:59.593 10:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.854 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.854 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:19:59.854 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: --dhchap-ctrl-secret DHHC-1:03:YjA3NjI5MjE0NThiMmFmMWQ2MmM2ZmM2ZDQzZDRjN2U1ZmYyODk2NmQwYzA0YTdjNjAyM2I3MDk4NjljMzIzYh+g9vo=: 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.425 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:00.684 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:00.944 request: 00:20:00.944 { 00:20:00.944 "name": "nvme0", 00:20:00.944 "trtype": "tcp", 00:20:00.944 "traddr": "10.0.0.2", 00:20:00.944 "adrfam": "ipv4", 00:20:00.944 "trsvcid": "4420", 00:20:00.944 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:00.944 "prchk_reftag": false, 00:20:00.944 "prchk_guard": false, 00:20:00.944 "hdgst": false, 00:20:00.944 "ddgst": false, 00:20:00.944 "dhchap_key": "key1", 00:20:00.944 "allow_unrecognized_csi": false, 00:20:00.944 "method": "bdev_nvme_attach_controller", 00:20:00.944 "req_id": 1 00:20:00.944 } 00:20:00.944 Got JSON-RPC error response 00:20:00.944 response: 00:20:00.944 { 00:20:00.944 "code": -5, 00:20:00.944 "message": "Input/output error" 00:20:00.944 } 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:00.944 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:01.884 nvme0n1 00:20:01.884 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:01.884 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:01.884 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.884 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.884 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.884 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:02.145 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:02.404 nvme0n1 00:20:02.404 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:02.404 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.404 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:02.665 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.665 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.665 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.665 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:02.665 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.665 10:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: '' 2s 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: ]] 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWNhNzc1ZTU1NDA3NDk5OGJiZWU1NTdkMWZkMzA1YzaJa9zH: 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:02.665 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: 2s 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: ]] 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjQxNjQ3MWM0ZTNmNGEyMWRlMDE4YTM0YWI2MmQwMGE2NTIzNzk4MWRiNjVlNmNihfZx8w==: 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:05.206 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:07.149 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:07.719 nvme0n1 00:20:07.719 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:07.719 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.719 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.719 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.719 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:07.719 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:08.289 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:08.549 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:09.119 request: 00:20:09.119 { 00:20:09.119 "name": "nvme0", 00:20:09.119 "dhchap_key": "key1", 00:20:09.119 "dhchap_ctrlr_key": "key3", 00:20:09.119 "method": "bdev_nvme_set_keys", 00:20:09.119 "req_id": 1 00:20:09.119 } 00:20:09.119 Got JSON-RPC error response 00:20:09.119 response: 00:20:09.119 { 00:20:09.119 "code": -13, 00:20:09.119 "message": "Permission denied" 00:20:09.119 } 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:09.119 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.379 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:20:09.379 10:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:10.318 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:10.318 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:10.318 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:10.578 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:11.148 nvme0n1 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
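The stretch of trace around here exercises in-band re-keying: the subsystem was just rotated to key2/key3 with nvmf_subsystem_set_keys, and the NOT wrapper whose expansion resumes below asserts that re-authenticating the live controller with a mismatched controller key (key2/key0) is refused. A sketch of the positive rotation path, built from the same RPCs this run traces (rpc/hostrpc as in the earlier sketch):

    # Rotate the target-side keys for this host, then re-key the live host controller in place.
    rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
    # Offering a stale or mismatched controller key is expected to fail re-authentication
    # with JSON-RPC error -13 (Permission denied), the negative case asserted below.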
00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:11.148 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:11.719 request: 00:20:11.719 { 00:20:11.719 "name": "nvme0", 00:20:11.719 "dhchap_key": "key2", 00:20:11.719 "dhchap_ctrlr_key": "key0", 00:20:11.719 "method": "bdev_nvme_set_keys", 00:20:11.719 "req_id": 1 00:20:11.719 } 00:20:11.719 Got JSON-RPC error response 00:20:11.719 response: 00:20:11.719 { 00:20:11.719 "code": -13, 00:20:11.719 "message": "Permission denied" 00:20:11.719 } 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:11.719 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.979 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:11.979 10:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2025395 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2025395 ']' 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2025395 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:13.083 
10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2025395 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2025395' 00:20:13.083 killing process with pid 2025395 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2025395 00:20:13.083 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2025395 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:13.344 rmmod nvme_tcp 00:20:13.344 rmmod nvme_fabrics 00:20:13.344 rmmod nvme_keyring 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2051393 ']' 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2051393 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2051393 ']' 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2051393 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051393 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051393' 00:20:13.344 killing process with pid 2051393 00:20:13.344 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2051393 00:20:13.344 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2051393 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.605 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.515 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:15.515 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DXX /tmp/spdk.key-sha256.QTK /tmp/spdk.key-sha384.vxS /tmp/spdk.key-sha512.O5p /tmp/spdk.key-sha512.U1Q /tmp/spdk.key-sha384.OYR /tmp/spdk.key-sha256.LS1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:15.515 00:20:15.515 real 2m37.228s 00:20:15.515 user 5m53.935s 00:20:15.515 sys 0m24.831s 00:20:15.515 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.515 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.515 ************************************ 00:20:15.515 END TEST nvmf_auth_target 00:20:15.515 ************************************ 00:20:15.775 10:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:15.776 10:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:15.776 10:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:15.776 10:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.776 10:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.776 ************************************ 00:20:15.776 START TEST nvmf_bdevio_no_huge 00:20:15.776 ************************************ 00:20:15.776 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:15.776 * Looking for test storage... 
00:20:15.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:15.776 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:16.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.037 --rc genhtml_branch_coverage=1 00:20:16.037 --rc genhtml_function_coverage=1 00:20:16.037 --rc genhtml_legend=1 00:20:16.037 --rc geninfo_all_blocks=1 00:20:16.037 --rc geninfo_unexecuted_blocks=1 00:20:16.037 00:20:16.037 ' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:16.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.037 --rc genhtml_branch_coverage=1 00:20:16.037 --rc genhtml_function_coverage=1 00:20:16.037 --rc genhtml_legend=1 00:20:16.037 --rc geninfo_all_blocks=1 00:20:16.037 --rc geninfo_unexecuted_blocks=1 00:20:16.037 00:20:16.037 ' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:16.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.037 --rc genhtml_branch_coverage=1 00:20:16.037 --rc genhtml_function_coverage=1 00:20:16.037 --rc genhtml_legend=1 00:20:16.037 --rc geninfo_all_blocks=1 00:20:16.037 --rc geninfo_unexecuted_blocks=1 00:20:16.037 00:20:16.037 ' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:16.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.037 --rc genhtml_branch_coverage=1 00:20:16.037 --rc genhtml_function_coverage=1 00:20:16.037 --rc genhtml_legend=1 00:20:16.037 --rc geninfo_all_blocks=1 00:20:16.037 --rc geninfo_unexecuted_blocks=1 00:20:16.037 00:20:16.037 ' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:16.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.037 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.038 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:24.189 
10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:24.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:24.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:24.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:24.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.189 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:24.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:20:24.190 00:20:24.190 --- 10.0.0.2 ping statistics --- 00:20:24.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.190 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:20:24.190 00:20:24.190 --- 10.0.0.1 ping statistics --- 00:20:24.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.190 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2059569 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2059569 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2059569 ']' 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.190 10:37:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.190 [2024-11-20 10:37:55.783413] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:20:24.190 [2024-11-20 10:37:55.783484] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:24.190 [2024-11-20 10:37:55.891269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.190 [2024-11-20 10:37:55.952418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.190 [2024-11-20 10:37:55.952470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.190 [2024-11-20 10:37:55.952478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.190 [2024-11-20 10:37:55.952485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.190 [2024-11-20 10:37:55.952492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.190 [2024-11-20 10:37:55.954044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:24.190 [2024-11-20 10:37:55.954221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:24.190 [2024-11-20 10:37:55.954388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:24.190 [2024-11-20 10:37:55.954389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.451 [2024-11-20 10:37:56.663774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.451 Malloc0 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.451 [2024-11-20 10:37:56.717583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:24.451 { 00:20:24.451 "params": { 00:20:24.451 "name": "Nvme$subsystem", 00:20:24.451 "trtype": "$TEST_TRANSPORT", 00:20:24.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.451 "adrfam": "ipv4", 00:20:24.451 "trsvcid": "$NVMF_PORT", 00:20:24.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.451 "hdgst": ${hdgst:-false}, 00:20:24.451 "ddgst": ${ddgst:-false} 00:20:24.451 }, 00:20:24.451 "method": "bdev_nvme_attach_controller" 00:20:24.451 } 00:20:24.451 EOF 00:20:24.451 )") 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:24.451 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:24.451 "params": { 00:20:24.451 "name": "Nvme1", 00:20:24.451 "trtype": "tcp", 00:20:24.451 "traddr": "10.0.0.2", 00:20:24.451 "adrfam": "ipv4", 00:20:24.451 "trsvcid": "4420", 00:20:24.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.451 "hdgst": false, 00:20:24.451 "ddgst": false 00:20:24.451 }, 00:20:24.451 "method": "bdev_nvme_attach_controller" 00:20:24.451 }' 00:20:24.451 [2024-11-20 10:37:56.776336] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:20:24.451 [2024-11-20 10:37:56.776405] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2059892 ] 00:20:24.712 [2024-11-20 10:37:56.874197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.712 [2024-11-20 10:37:56.937015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.712 [2024-11-20 10:37:56.937208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.712 [2024-11-20 10:37:56.937249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.972 I/O targets: 00:20:24.972 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:24.972 00:20:24.972 00:20:24.972 CUnit - A unit testing framework for C - Version 2.1-3 00:20:24.972 http://cunit.sourceforge.net/ 00:20:24.972 00:20:24.972 00:20:24.972 Suite: bdevio tests on: Nvme1n1 00:20:24.972 Test: blockdev write read block ...passed 00:20:24.972 Test: blockdev write zeroes read block ...passed 00:20:24.972 Test: blockdev write zeroes read no split ...passed 00:20:24.972 Test: blockdev write zeroes read split ...passed 00:20:24.972 Test: blockdev write zeroes read split partial ...passed 00:20:24.972 Test: blockdev reset ...[2024-11-20 10:37:57.330056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:24.973 [2024-11-20 10:37:57.330166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfd800 (9): Bad file descriptor 00:20:25.234 [2024-11-20 10:37:57.467641] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:20:25.234 passed 00:20:25.234 Test: blockdev write read 8 blocks ...passed 00:20:25.234 Test: blockdev write read size > 128k ...passed 00:20:25.234 Test: blockdev write read invalid size ...passed 00:20:25.234 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:25.234 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:25.234 Test: blockdev write read max offset ...passed 00:20:25.495 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:25.495 Test: blockdev writev readv 8 blocks ...passed 00:20:25.495 Test: blockdev writev readv 30 x 1block ...passed 00:20:25.495 Test: blockdev writev readv block ...passed 00:20:25.495 Test: blockdev writev readv size > 128k ...passed 00:20:25.495 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:25.495 Test: blockdev comparev and writev ...[2024-11-20 10:37:57.694455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.495 [2024-11-20 10:37:57.694506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.495 [2024-11-20 10:37:57.694522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.495 [2024-11-20 10:37:57.694532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.495 [2024-11-20 10:37:57.695056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.495 [2024-11-20 10:37:57.695072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.495 [2024-11-20 10:37:57.695086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.495 [2024-11-20 10:37:57.695094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.495 [2024-11-20 10:37:57.695500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.495 [2024-11-20 10:37:57.695515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.496 [2024-11-20 10:37:57.695530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.496 [2024-11-20 10:37:57.695538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.496 [2024-11-20 10:37:57.695920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.496 [2024-11-20 10:37:57.695933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.496 [2024-11-20 10:37:57.695948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:25.496 [2024-11-20 10:37:57.695968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.496 passed 00:20:25.496 Test: blockdev nvme passthru rw ...passed 00:20:25.496 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:37:57.780046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.496 [2024-11-20 10:37:57.780065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.496 [2024-11-20 10:37:57.780455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.496 [2024-11-20 10:37:57.780468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.496 [2024-11-20 10:37:57.780864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.496 [2024-11-20 10:37:57.780877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.496 [2024-11-20 10:37:57.781278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.496 [2024-11-20 10:37:57.781291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.496 passed 00:20:25.496 Test: blockdev nvme admin passthru ...passed 00:20:25.496 Test: blockdev copy ...passed 00:20:25.496 00:20:25.496 Run Summary: Type Total Ran Passed Failed Inactive 00:20:25.496 suites 1 1 n/a 0 0 00:20:25.496 tests 23 23 23 0 0 00:20:25.496 asserts 152 152 152 0 n/a 00:20:25.496 00:20:25.496 Elapsed time = 1.293 seconds 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:25.756 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.016 rmmod nvme_tcp 00:20:26.016 rmmod nvme_fabrics 00:20:26.016 rmmod nvme_keyring 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2059569 ']' 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2059569 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2059569 ']' 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2059569 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059569 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059569' 00:20:26.016 killing process with pid 2059569 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2059569 00:20:26.016 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2059569 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.276 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.818 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.818 00:20:28.818 real 0m12.612s 00:20:28.818 user 0m14.635s 00:20:28.818 sys 0m6.684s 00:20:28.818 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.818 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.818 ************************************ 00:20:28.818 END TEST nvmf_bdevio_no_huge 00:20:28.818 ************************************ 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.819 ************************************ 00:20:28.819 START TEST nvmf_tls 00:20:28.819 ************************************ 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:28.819 * Looking for test storage... 00:20:28.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.819 --rc genhtml_branch_coverage=1 00:20:28.819 --rc genhtml_function_coverage=1 00:20:28.819 --rc genhtml_legend=1 00:20:28.819 --rc geninfo_all_blocks=1 00:20:28.819 --rc geninfo_unexecuted_blocks=1 00:20:28.819 00:20:28.819 ' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.819 --rc genhtml_branch_coverage=1 00:20:28.819 --rc genhtml_function_coverage=1 00:20:28.819 --rc genhtml_legend=1 00:20:28.819 --rc geninfo_all_blocks=1 00:20:28.819 --rc geninfo_unexecuted_blocks=1 00:20:28.819 00:20:28.819 ' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.819 --rc genhtml_branch_coverage=1 00:20:28.819 --rc genhtml_function_coverage=1 00:20:28.819 --rc genhtml_legend=1 00:20:28.819 --rc geninfo_all_blocks=1 00:20:28.819 --rc geninfo_unexecuted_blocks=1 00:20:28.819 00:20:28.819 ' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.819 --rc genhtml_branch_coverage=1 00:20:28.819 --rc genhtml_function_coverage=1 00:20:28.819 --rc genhtml_legend=1 00:20:28.819 --rc geninfo_all_blocks=1 00:20:28.819 --rc geninfo_unexecuted_blocks=1 00:20:28.819 00:20:28.819 ' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
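
The `lt 1.15 2` check traced above decides which lcov rc options to export by comparing version strings field by field: both strings are split on dots and each numeric component is compared left to right, with missing components treated as zero. A minimal standalone sketch of the same logic in bash (hypothetical helper name; the real comparison is `cmp_versions` in scripts/common.sh, which also splits on `-` and `:`):

    version_lt() {   # exit 0 when version $1 sorts strictly before version $2
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}    # missing fields count as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                         # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"

For the `1.15` vs `2` case traced above, the first components already decide it (1 < 2), which is why the trace shows `ver1[v]=1`, `ver2[v]=2` and then `return 0` before the 1.x-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are exported.
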
00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.819 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.820 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.954 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:36.955 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:36.955 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:36.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:36.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:20:36.955 00:20:36.955 --- 10.0.0.2 ping statistics --- 00:20:36.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.955 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:36.955 00:20:36.955 --- 10.0.0.1 ping statistics --- 00:20:36.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.955 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2064296 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2064296 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2064296 ']' 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.955 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.955 [2024-11-20 10:38:08.445011] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
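
The target launch traced here follows the harness's start-and-wait pattern: `nvmf_tgt` is started inside the cvl_0_0_ns_spdk namespace with `--wait-for-rpc`, the `waitforlisten` helper polls the UNIX-domain RPC socket until the app answers, and only then are the sock_* RPCs and `framework_start_init` issued (both visible further down in this run). A rough sketch of that sequence, assuming the workspace paths above; the polling loop is a simplified stand-in for `waitforlisten`, and using `rpc_get_methods` as the liveness probe is this sketch's choice, not necessarily the helper's:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=("$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock)

    # start the target in the test namespace; --wait-for-rpc defers subsystem init
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!

    # wait until the RPC server accepts requests (simplified waitforlisten)
    for _ in {1..100}; do
        "${RPC[@]}" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    # pre-init settings must land before framework_start_init unpauses the app
    "${RPC[@]}" sock_set_default_impl -i ssl
    "${RPC[@]}" sock_impl_set_options -i ssl --tls-version 13
    "${RPC[@]}" framework_start_init

Running with `--wait-for-rpc` keeps the app paused so the ssl socket implementation can be selected and tuned before subsystem initialization consumes those settings, which is why the script issues the sock_* RPCs ahead of `framework_start_init`.
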
00:20:36.955 [2024-11-20 10:38:08.445088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.955 [2024-11-20 10:38:08.547450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.955 [2024-11-20 10:38:08.597772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.955 [2024-11-20 10:38:08.597822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.955 [2024-11-20 10:38:08.597831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.955 [2024-11-20 10:38:08.597839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.956 [2024-11-20 10:38:08.597845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.956 [2024-11-20 10:38:08.598613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:36.956 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:37.217 true 00:20:37.217 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.217 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:37.478 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:37.478 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:37.478 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:37.738 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.738 10:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:37.999 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:37.999 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:37.999 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:37.999 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.999 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:38.259 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:38.259 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:38.259 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.259 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:38.519 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:38.519 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:38.519 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:38.519 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.519 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:38.780 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:38.780 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:38.780 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:39.040 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3FW9N7sFuh 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.FR7mBEWrH4 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3FW9N7sFuh 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.FR7mBEWrH4 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:39.300 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:39.560 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3FW9N7sFuh 00:20:39.560 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3FW9N7sFuh 00:20:39.560 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.820 [2024-11-20 10:38:12.024770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.820 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.080 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.080 [2024-11-20 10:38:12.361584] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.080 [2024-11-20 10:38:12.361794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.080 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.340 malloc0 00:20:40.340 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.340 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3FW9N7sFuh 00:20:40.599 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.858 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3FW9N7sFuh 00:20:50.853 Initializing NVMe Controllers 00:20:50.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.853 Initialization complete. Launching workers. 00:20:50.853 ======================================================== 00:20:50.853 Latency(us) 00:20:50.853 Device Information : IOPS MiB/s Average min max 00:20:50.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18841.28 73.60 3397.00 1137.10 3965.52 00:20:50.853 ======================================================== 00:20:50.853 Total : 18841.28 73.60 3397.00 1137.10 3965.52 00:20:50.853 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3FW9N7sFuh 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3FW9N7sFuh 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2067294 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2067294 /var/tmp/bdevperf.sock 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2067294 ']' 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:50.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.853 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.853 [2024-11-20 10:38:23.205986] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:20:50.853 [2024-11-20 10:38:23.206044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067294 ] 00:20:51.114 [2024-11-20 10:38:23.292149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.114 [2024-11-20 10:38:23.327191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.684 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.684 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.684 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3FW9N7sFuh 00:20:51.946 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.946 [2024-11-20 10:38:24.298504] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.206 TLSTESTn1 00:20:52.206 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:52.206 Running I/O for 10 seconds... 
00:20:54.526 3990.00 IOPS, 15.59 MiB/s [2024-11-20T09:38:27.839Z] 4231.00 IOPS, 16.53 MiB/s [2024-11-20T09:38:28.780Z] 4605.00 IOPS, 17.99 MiB/s [2024-11-20T09:38:29.719Z] 5014.00 IOPS, 19.59 MiB/s [2024-11-20T09:38:30.655Z] 5105.60 IOPS, 19.94 MiB/s [2024-11-20T09:38:31.608Z] 5134.67 IOPS, 20.06 MiB/s [2024-11-20T09:38:32.544Z] 5265.29 IOPS, 20.57 MiB/s [2024-11-20T09:38:33.925Z] 5389.75 IOPS, 21.05 MiB/s [2024-11-20T09:38:34.864Z] 5375.11 IOPS, 21.00 MiB/s [2024-11-20T09:38:34.864Z] 5486.00 IOPS, 21.43 MiB/s 00:21:02.488 Latency(us) 00:21:02.488 [2024-11-20T09:38:34.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.488 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.488 Verification LBA range: start 0x0 length 0x2000 00:21:02.488 TLSTESTn1 : 10.05 5473.23 21.38 0.00 0.00 23320.37 6116.69 45438.29 00:21:02.488 [2024-11-20T09:38:34.864Z] =================================================================================================================== 00:21:02.488 [2024-11-20T09:38:34.864Z] Total : 5473.23 21.38 0.00 0.00 23320.37 6116.69 45438.29 00:21:02.488 { 00:21:02.488 "results": [ 00:21:02.488 { 00:21:02.488 "job": "TLSTESTn1", 00:21:02.488 "core_mask": "0x4", 00:21:02.488 "workload": "verify", 00:21:02.488 "status": "finished", 00:21:02.488 "verify_range": { 00:21:02.488 "start": 0, 00:21:02.488 "length": 8192 00:21:02.488 }, 00:21:02.488 "queue_depth": 128, 00:21:02.488 "io_size": 4096, 00:21:02.488 "runtime": 10.046535, 00:21:02.488 "iops": 5473.230322693346, 00:21:02.488 "mibps": 21.379805948020884, 00:21:02.488 "io_failed": 0, 00:21:02.488 "io_timeout": 0, 00:21:02.488 "avg_latency_us": 23320.36684355696, 00:21:02.488 "min_latency_us": 6116.693333333334, 00:21:02.488 "max_latency_us": 45438.293333333335 00:21:02.488 } 00:21:02.488 ], 00:21:02.488 "core_count": 1 00:21:02.488 } 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2067294 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2067294 ']' 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2067294 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2067294 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2067294' 00:21:02.488 killing process with pid 2067294 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2067294 00:21:02.488 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.488 00:21:02.488 Latency(us) 00:21:02.488 [2024-11-20T09:38:34.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.488 [2024-11-20T09:38:34.864Z] 
=================================================================================================================== 00:21:02.488 [2024-11-20T09:38:34.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2067294 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FR7mBEWrH4 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FR7mBEWrH4 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FR7mBEWrH4 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FR7mBEWrH4 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.488 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2069503 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2069503 /var/tmp/bdevperf.sock 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2069503 ']' 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.489 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.489 [2024-11-20 10:38:34.802574] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:02.489 [2024-11-20 10:38:34.802650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069503 ] 00:21:02.749 [2024-11-20 10:38:34.884222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.749 [2024-11-20 10:38:34.912742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.319 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.319 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:03.319 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FR7mBEWrH4 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:03.580 [2024-11-20 10:38:35.883071] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.580 [2024-11-20 10:38:35.887579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.580 [2024-11-20 10:38:35.888204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224dbb0 (107): Transport endpoint is not connected 00:21:03.580 [2024-11-20 10:38:35.889199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224dbb0 (9): Bad file descriptor 00:21:03.580 [2024-11-20 10:38:35.890201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:03.580 [2024-11-20 10:38:35.890209] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.580 [2024-11-20 10:38:35.890215] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:03.580 [2024-11-20 10:38:35.890223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:03.580 request: 00:21:03.580 { 00:21:03.580 "name": "TLSTEST", 00:21:03.580 "trtype": "tcp", 00:21:03.580 "traddr": "10.0.0.2", 00:21:03.580 "adrfam": "ipv4", 00:21:03.580 "trsvcid": "4420", 00:21:03.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.580 "prchk_reftag": false, 00:21:03.580 "prchk_guard": false, 00:21:03.580 "hdgst": false, 00:21:03.580 "ddgst": false, 00:21:03.580 "psk": "key0", 00:21:03.580 "allow_unrecognized_csi": false, 00:21:03.580 "method": "bdev_nvme_attach_controller", 00:21:03.580 "req_id": 1 00:21:03.580 } 00:21:03.580 Got JSON-RPC error response 00:21:03.580 response: 00:21:03.580 { 00:21:03.580 "code": -5, 00:21:03.580 "message": "Input/output error" 00:21:03.580 } 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2069503 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2069503 ']' 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2069503 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.580 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069503 00:21:03.841 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:03.841 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:03.841 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069503' 00:21:03.841 killing process with pid 2069503 00:21:03.841 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2069503 00:21:03.841 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.841 00:21:03.841 Latency(us) 00:21:03.841 [2024-11-20T09:38:36.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.841 [2024-11-20T09:38:36.217Z] =================================================================================================================== 00:21:03.841 [2024-11-20T09:38:36.217Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.841 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2069503 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3FW9N7sFuh 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.3FW9N7sFuh 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3FW9N7sFuh 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3FW9N7sFuh 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2069678 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2069678 /var/tmp/bdevperf.sock 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2069678 ']' 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.841 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.841 [2024-11-20 10:38:36.143421] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:03.841 [2024-11-20 10:38:36.143475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069678 ] 00:21:04.102 [2024-11-20 10:38:36.230156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.102 [2024-11-20 10:38:36.258116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.672 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.672 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:04.672 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3FW9N7sFuh 00:21:04.933 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:04.933 [2024-11-20 10:38:37.268693] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.933 [2024-11-20 10:38:37.280196] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:04.933 [2024-11-20 10:38:37.280216] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:04.933 [2024-11-20 10:38:37.280235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:04.933 [2024-11-20 10:38:37.280990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x981bb0 (107): Transport endpoint is not connected 00:21:04.933 [2024-11-20 10:38:37.281986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x981bb0 (9): Bad file descriptor 00:21:04.933 [2024-11-20 10:38:37.282988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:04.933 [2024-11-20 10:38:37.282995] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:04.933 [2024-11-20 10:38:37.283001] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:04.933 [2024-11-20 10:38:37.283010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
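[Note] The failure traced above, dumped as the JSON-RPC error response just below, is the outcome target/tls.sh@150 is asserting: the key in /tmp/tmp.3FW9N7sFuh was registered for a different host pairing, so presenting it as host2 leaves the target with no matching PSK identity ("Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"). Stripped of the xtrace noise, the step reduces to two RPC calls, copied from this run (rpc.py shown relative to the spdk tree; the run uses the absolute /var/jenkins/... path):

# Register the PSK file in the bdevperf app's keyring, then attempt a TLS attach.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3FW9N7sFuh
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0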
00:21:04.933 request: 00:21:04.933 { 00:21:04.933 "name": "TLSTEST", 00:21:04.933 "trtype": "tcp", 00:21:04.933 "traddr": "10.0.0.2", 00:21:04.933 "adrfam": "ipv4", 00:21:04.933 "trsvcid": "4420", 00:21:04.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.933 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.933 "prchk_reftag": false, 00:21:04.933 "prchk_guard": false, 00:21:04.933 "hdgst": false, 00:21:04.933 "ddgst": false, 00:21:04.933 "psk": "key0", 00:21:04.933 "allow_unrecognized_csi": false, 00:21:04.933 "method": "bdev_nvme_attach_controller", 00:21:04.933 "req_id": 1 00:21:04.933 } 00:21:04.933 Got JSON-RPC error response 00:21:04.933 response: 00:21:04.933 { 00:21:04.933 "code": -5, 00:21:04.933 "message": "Input/output error" 00:21:04.933 } 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2069678 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2069678 ']' 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2069678 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069678 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069678' 00:21:05.194 killing process with pid 2069678 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2069678 00:21:05.194 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.194 00:21:05.194 Latency(us) 00:21:05.194 [2024-11-20T09:38:37.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.194 [2024-11-20T09:38:37.570Z] =================================================================================================================== 00:21:05.194 [2024-11-20T09:38:37.570Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2069678 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3FW9N7sFuh 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.3FW9N7sFuh 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3FW9N7sFuh 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3FW9N7sFuh 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2070017 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2070017 /var/tmp/bdevperf.sock 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2070017 ']' 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.194 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.195 [2024-11-20 10:38:37.524800] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:05.195 [2024-11-20 10:38:37.524854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070017 ] 00:21:05.454 [2024-11-20 10:38:37.610355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.454 [2024-11-20 10:38:37.638384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.025 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.025 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.025 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3FW9N7sFuh 00:21:06.285 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:06.545 [2024-11-20 10:38:38.665099] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.545 [2024-11-20 10:38:38.673576] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:06.545 [2024-11-20 10:38:38.673593] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:06.545 [2024-11-20 10:38:38.673613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:06.545 [2024-11-20 10:38:38.674285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f44bb0 (107): Transport endpoint is not connected 00:21:06.545 [2024-11-20 10:38:38.675281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f44bb0 (9): Bad file descriptor 00:21:06.545 [2024-11-20 10:38:38.676283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:06.545 [2024-11-20 10:38:38.676291] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:06.545 [2024-11-20 10:38:38.676297] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:06.545 [2024-11-20 10:38:38.676305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
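[Note] Same pattern for the cnode2 case at target/tls.sh@153: the target looks the key up by the TLS PSK identity string visible in the trace, which combines a fixed prefix with the host and subsystem NQNs ("NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" here), and nothing is registered under that pairing, so the attach fails with the I/O error dumped below. For reference only, a hypothetical way to make that identity resolve, which this test deliberately does not do, would be to register the key for the pairing on the target side:

# Hypothetical repair, not performed in this run; RPC names as used elsewhere in this log.
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3FW9N7sFuh
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0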
00:21:06.545 request: 00:21:06.545 { 00:21:06.545 "name": "TLSTEST", 00:21:06.545 "trtype": "tcp", 00:21:06.545 "traddr": "10.0.0.2", 00:21:06.545 "adrfam": "ipv4", 00:21:06.545 "trsvcid": "4420", 00:21:06.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:06.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.545 "prchk_reftag": false, 00:21:06.545 "prchk_guard": false, 00:21:06.545 "hdgst": false, 00:21:06.545 "ddgst": false, 00:21:06.545 "psk": "key0", 00:21:06.546 "allow_unrecognized_csi": false, 00:21:06.546 "method": "bdev_nvme_attach_controller", 00:21:06.546 "req_id": 1 00:21:06.546 } 00:21:06.546 Got JSON-RPC error response 00:21:06.546 response: 00:21:06.546 { 00:21:06.546 "code": -5, 00:21:06.546 "message": "Input/output error" 00:21:06.546 } 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2070017 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2070017 ']' 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2070017 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070017 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070017' 00:21:06.546 killing process with pid 2070017 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2070017 00:21:06.546 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.546 00:21:06.546 Latency(us) 00:21:06.546 [2024-11-20T09:38:38.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.546 [2024-11-20T09:38:38.922Z] =================================================================================================================== 00:21:06.546 [2024-11-20T09:38:38.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2070017 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:06.546 
10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2070361 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2070361 /var/tmp/bdevperf.sock 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2070361 ']' 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.546 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.807 [2024-11-20 10:38:38.926106] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:06.807 [2024-11-20 10:38:38.926166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070361 ] 00:21:06.807 [2024-11-20 10:38:39.009685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.807 [2024-11-20 10:38:39.038066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.377 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.377 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.377 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:07.637 [2024-11-20 10:38:39.867942] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:07.638 [2024-11-20 10:38:39.867971] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:07.638 request: 00:21:07.638 { 00:21:07.638 "name": "key0", 00:21:07.638 "path": "", 00:21:07.638 "method": "keyring_file_add_key", 00:21:07.638 "req_id": 1 00:21:07.638 } 00:21:07.638 Got JSON-RPC error response 00:21:07.638 response: 00:21:07.638 { 00:21:07.638 "code": -1, 00:21:07.638 "message": "Operation not permitted" 00:21:07.638 } 00:21:07.638 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.898 [2024-11-20 10:38:40.056501] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.898 [2024-11-20 10:38:40.056531] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:07.898 request: 00:21:07.898 { 00:21:07.898 "name": "TLSTEST", 00:21:07.898 "trtype": "tcp", 00:21:07.898 "traddr": "10.0.0.2", 00:21:07.898 "adrfam": "ipv4", 00:21:07.898 "trsvcid": "4420", 00:21:07.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.898 "prchk_reftag": false, 00:21:07.898 "prchk_guard": false, 00:21:07.898 "hdgst": false, 00:21:07.898 "ddgst": false, 00:21:07.898 "psk": "key0", 00:21:07.898 "allow_unrecognized_csi": false, 00:21:07.898 "method": "bdev_nvme_attach_controller", 00:21:07.898 "req_id": 1 00:21:07.898 } 00:21:07.898 Got JSON-RPC error response 00:21:07.898 response: 00:21:07.898 { 00:21:07.898 "code": -126, 00:21:07.898 "message": "Required key not available" 00:21:07.898 } 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2070361 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2070361 ']' 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2070361 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2070361 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070361' 00:21:07.898 killing process with pid 2070361 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2070361 00:21:07.898 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.898 00:21:07.898 Latency(us) 00:21:07.898 [2024-11-20T09:38:40.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.898 [2024-11-20T09:38:40.274Z] =================================================================================================================== 00:21:07.898 [2024-11-20T09:38:40.274Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2070361 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:07.898 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2064296 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2064296 ']' 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2064296 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.899 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2064296 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2064296' 00:21:08.159 killing process with pid 2064296 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2064296 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2064296 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:08.159 10:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gZYI21px8v 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gZYI21px8v 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2070710 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2070710 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2070710 ']' 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.159 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.159 [2024-11-20 10:38:40.530877] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:08.159 [2024-11-20 10:38:40.530936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.419 [2024-11-20 10:38:40.621139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.419 [2024-11-20 10:38:40.650528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.420 [2024-11-20 10:38:40.650555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:08.420 [2024-11-20 10:38:40.650561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.420 [2024-11-20 10:38:40.650566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.420 [2024-11-20 10:38:40.650570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.420 [2024-11-20 10:38:40.651003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gZYI21px8v 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gZYI21px8v 00:21:08.989 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.249 [2024-11-20 10:38:41.510608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.249 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:09.509 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:09.509 [2024-11-20 10:38:41.847436] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.509 [2024-11-20 10:38:41.847644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.509 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:09.769 malloc0 00:21:09.769 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:10.030 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:10.030 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZYI21px8v 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gZYI21px8v 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2071075 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2071075 /var/tmp/bdevperf.sock 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2071075 ']' 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.290 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 [2024-11-20 10:38:42.569519] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:10.290 [2024-11-20 10:38:42.569572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071075 ] 00:21:10.290 [2024-11-20 10:38:42.651459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.550 [2024-11-20 10:38:42.680486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.120 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.120 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.120 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:11.380 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.380 [2024-11-20 10:38:43.695128] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.640 TLSTESTn1 00:21:11.640 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:11.640 Running I/O for 10 seconds... 00:21:13.524 5331.00 IOPS, 20.82 MiB/s [2024-11-20T09:38:47.286Z] 5466.00 IOPS, 21.35 MiB/s [2024-11-20T09:38:48.225Z] 5298.00 IOPS, 20.70 MiB/s [2024-11-20T09:38:49.163Z] 5527.50 IOPS, 21.59 MiB/s [2024-11-20T09:38:50.100Z] 5680.40 IOPS, 22.19 MiB/s [2024-11-20T09:38:51.038Z] 5624.67 IOPS, 21.97 MiB/s [2024-11-20T09:38:51.978Z] 5593.29 IOPS, 21.85 MiB/s [2024-11-20T09:38:52.917Z] 5653.75 IOPS, 22.08 MiB/s [2024-11-20T09:38:54.299Z] 5650.89 IOPS, 22.07 MiB/s [2024-11-20T09:38:54.299Z] 5664.90 IOPS, 22.13 MiB/s 00:21:21.923 Latency(us) 00:21:21.923 [2024-11-20T09:38:54.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.923 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:21.923 Verification LBA range: start 0x0 length 0x2000 00:21:21.923 TLSTESTn1 : 10.01 5669.57 22.15 0.00 0.00 22546.63 5133.65 37355.52 00:21:21.923 [2024-11-20T09:38:54.299Z] =================================================================================================================== 00:21:21.923 [2024-11-20T09:38:54.299Z] Total : 5669.57 22.15 0.00 0.00 22546.63 5133.65 37355.52 00:21:21.923 { 00:21:21.923 "results": [ 00:21:21.923 { 00:21:21.923 "job": "TLSTESTn1", 00:21:21.923 "core_mask": "0x4", 00:21:21.923 "workload": "verify", 00:21:21.923 "status": "finished", 00:21:21.923 "verify_range": { 00:21:21.923 "start": 0, 00:21:21.923 "length": 8192 00:21:21.923 }, 00:21:21.923 "queue_depth": 128, 00:21:21.923 "io_size": 4096, 00:21:21.923 "runtime": 10.014161, 00:21:21.923 "iops": 5669.571320053672, 00:21:21.923 "mibps": 22.146762968959656, 00:21:21.923 "io_failed": 0, 00:21:21.923 "io_timeout": 0, 00:21:21.923 "avg_latency_us": 22546.625680334415, 00:21:21.923 "min_latency_us": 5133.653333333334, 00:21:21.923 "max_latency_us": 37355.52 00:21:21.923 } 00:21:21.923 ], 00:21:21.923 "core_count": 1 
00:21:21.923 } 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2071075 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2071075 ']' 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2071075 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.923 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2071075 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2071075' 00:21:21.923 killing process with pid 2071075 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2071075 00:21:21.923 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.923 00:21:21.923 Latency(us) 00:21:21.923 [2024-11-20T09:38:54.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.923 [2024-11-20T09:38:54.299Z] =================================================================================================================== 00:21:21.923 [2024-11-20T09:38:54.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2071075 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gZYI21px8v 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZYI21px8v 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZYI21px8v 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZYI21px8v 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:21.923 10:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gZYI21px8v 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2073421 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2073421 /var/tmp/bdevperf.sock 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2073421 ']' 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.923 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.923 [2024-11-20 10:38:54.173398] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:21.923 [2024-11-20 10:38:54.173456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073421 ] 00:21:21.923 [2024-11-20 10:38:54.256126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.923 [2024-11-20 10:38:54.284961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.862 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.862 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:22.862 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:22.862 [2024-11-20 10:38:55.126932] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gZYI21px8v': 0100666 00:21:22.862 [2024-11-20 10:38:55.126952] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:22.862 request: 00:21:22.862 { 00:21:22.862 "name": "key0", 00:21:22.862 "path": "/tmp/tmp.gZYI21px8v", 00:21:22.862 "method": "keyring_file_add_key", 00:21:22.862 "req_id": 1 00:21:22.862 } 00:21:22.862 Got JSON-RPC error response 00:21:22.862 response: 00:21:22.862 { 00:21:22.862 "code": -1, 00:21:22.862 "message": "Operation not permitted" 00:21:22.862 } 00:21:22.862 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:23.121 [2024-11-20 10:38:55.311460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.121 [2024-11-20 10:38:55.311483] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:23.121 request: 00:21:23.121 { 00:21:23.121 "name": "TLSTEST", 00:21:23.121 "trtype": "tcp", 00:21:23.121 "traddr": "10.0.0.2", 00:21:23.121 "adrfam": "ipv4", 00:21:23.121 "trsvcid": "4420", 00:21:23.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.121 "prchk_reftag": false, 00:21:23.121 "prchk_guard": false, 00:21:23.121 "hdgst": false, 00:21:23.121 "ddgst": false, 00:21:23.121 "psk": "key0", 00:21:23.121 "allow_unrecognized_csi": false, 00:21:23.121 "method": "bdev_nvme_attach_controller", 00:21:23.121 "req_id": 1 00:21:23.121 } 00:21:23.121 Got JSON-RPC error response 00:21:23.121 response: 00:21:23.121 { 00:21:23.121 "code": -126, 00:21:23.121 "message": "Required key not available" 00:21:23.121 } 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2073421 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2073421 ']' 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2073421 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2073421 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:23.121 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2073421' 00:21:23.122 killing process with pid 2073421 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2073421 00:21:23.122 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.122 00:21:23.122 Latency(us) 00:21:23.122 [2024-11-20T09:38:55.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.122 [2024-11-20T09:38:55.498Z] =================================================================================================================== 00:21:23.122 [2024-11-20T09:38:55.498Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2073421 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2070710 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2070710 ']' 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2070710 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:23.122 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070710 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070710' 00:21:23.382 killing process with pid 2070710 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2070710 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2070710 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2073671 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2073671 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2073671 ']' 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.382 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.382 [2024-11-20 10:38:55.725446] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:23.382 [2024-11-20 10:38:55.725505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.642 [2024-11-20 10:38:55.815824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.642 [2024-11-20 10:38:55.845717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.642 [2024-11-20 10:38:55.845745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.642 [2024-11-20 10:38:55.845750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.642 [2024-11-20 10:38:55.845755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.642 [2024-11-20 10:38:55.845759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
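[Note] The NOT setup_nvmf_tgt step traced below is expected to fail partway: /tmp/tmp.gZYI21px8v was made 0666 at target/tls.sh@171, and the keyring refuses key files that are group- or world-accessible. Condensed to the RPC sequence the helper issues (every name and value below is taken from the trace that follows; -k on the listener is what enables TLS):

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v    # rejected here: file mode is 0100666
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0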
00:21:23.642 [2024-11-20 10:38:55.846231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.212 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.212 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:24.212 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gZYI21px8v 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gZYI21px8v 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.gZYI21px8v 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gZYI21px8v 00:21:24.213 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.472 [2024-11-20 10:38:56.705445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.472 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.731 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.731 [2024-11-20 10:38:57.026225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.731 [2024-11-20 10:38:57.026422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.731 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.990 malloc0 00:21:24.990 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:25.250 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:25.250 [2024-11-20 
10:38:57.533347] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gZYI21px8v': 0100666 00:21:25.250 [2024-11-20 10:38:57.533368] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:25.250 request: 00:21:25.250 { 00:21:25.250 "name": "key0", 00:21:25.250 "path": "/tmp/tmp.gZYI21px8v", 00:21:25.250 "method": "keyring_file_add_key", 00:21:25.250 "req_id": 1 00:21:25.250 } 00:21:25.250 Got JSON-RPC error response 00:21:25.250 response: 00:21:25.250 { 00:21:25.250 "code": -1, 00:21:25.250 "message": "Operation not permitted" 00:21:25.250 } 00:21:25.250 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:25.510 [2024-11-20 10:38:57.701787] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:25.510 [2024-11-20 10:38:57.701815] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:25.510 request: 00:21:25.510 { 00:21:25.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.510 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.510 "psk": "key0", 00:21:25.510 "method": "nvmf_subsystem_add_host", 00:21:25.510 "req_id": 1 00:21:25.510 } 00:21:25.510 Got JSON-RPC error response 00:21:25.510 response: 00:21:25.510 { 00:21:25.510 "code": -32603, 00:21:25.510 "message": "Internal error" 00:21:25.510 } 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2073671 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2073671 ']' 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2073671 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2073671 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2073671' 00:21:25.510 killing process with pid 2073671 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2073671 00:21:25.510 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2073671 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gZYI21px8v 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:25.769 10:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2074142 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2074142 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2074142 ']' 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.769 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.769 [2024-11-20 10:38:57.967003] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:25.770 [2024-11-20 10:38:57.967063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.770 [2024-11-20 10:38:58.056301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.770 [2024-11-20 10:38:58.086346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.770 [2024-11-20 10:38:58.086373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.770 [2024-11-20 10:38:58.086381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.770 [2024-11-20 10:38:58.086386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.770 [2024-11-20 10:38:58.086390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
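The two JSON-RPC failures above are the expected negative path of this test step: keyring.c refuses any key file whose mode grants group or other access (0100666 here), so keyring_file_add_key returns "Operation not permitted", and the subsequent nvmf_subsystem_add_host fails because key0 was never registered. The chmod 0600 at target/tls.sh@182 is the fix; a condensed sketch of the recovery sequence (rpc.py abbreviates the full scripts/rpc.py path used throughout this log):

  # Key files must be owner-only before SPDK's file keyring accepts them.
  chmod 0600 /tmp/tmp.gZYI21px8v
  rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v
  # With key0 registered, associating the host PSK now succeeds.
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0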
00:21:25.770 [2024-11-20 10:38:58.086829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gZYI21px8v 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gZYI21px8v 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:26.709 [2024-11-20 10:38:58.938535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.709 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:26.970 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:26.970 [2024-11-20 10:38:59.259320] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.970 [2024-11-20 10:38:59.259519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.970 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:27.229 malloc0 00:21:27.229 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:27.489 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:27.489 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2074504 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2074504 /var/tmp/bdevperf.sock 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2074504 ']' 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.749 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.749 [2024-11-20 10:38:59.999398] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:27.749 [2024-11-20 10:38:59.999452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074504 ] 00:21:27.749 [2024-11-20 10:39:00.085262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.749 [2024-11-20 10:39:00.114525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.690 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.690 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:28.690 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:28.690 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:28.950 [2024-11-20 10:39:01.081196] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.950 TLSTESTn1 00:21:28.950 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:29.210 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:29.210 "subsystems": [ 00:21:29.210 { 00:21:29.210 "subsystem": "keyring", 00:21:29.210 "config": [ 00:21:29.210 { 00:21:29.210 "method": "keyring_file_add_key", 00:21:29.210 "params": { 00:21:29.210 "name": "key0", 00:21:29.211 "path": "/tmp/tmp.gZYI21px8v" 00:21:29.211 } 00:21:29.211 } 00:21:29.211 ] 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "subsystem": "iobuf", 00:21:29.211 "config": [ 00:21:29.211 { 00:21:29.211 "method": "iobuf_set_options", 00:21:29.211 "params": { 00:21:29.211 "small_pool_count": 8192, 00:21:29.211 "large_pool_count": 1024, 00:21:29.211 "small_bufsize": 8192, 00:21:29.211 "large_bufsize": 135168, 00:21:29.211 "enable_numa": false 00:21:29.211 } 00:21:29.211 } 00:21:29.211 ] 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "subsystem": "sock", 00:21:29.211 "config": [ 00:21:29.211 { 00:21:29.211 "method": "sock_set_default_impl", 00:21:29.211 "params": { 00:21:29.211 "impl_name": "posix" 
00:21:29.211 } 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "method": "sock_impl_set_options", 00:21:29.211 "params": { 00:21:29.211 "impl_name": "ssl", 00:21:29.211 "recv_buf_size": 4096, 00:21:29.211 "send_buf_size": 4096, 00:21:29.211 "enable_recv_pipe": true, 00:21:29.211 "enable_quickack": false, 00:21:29.211 "enable_placement_id": 0, 00:21:29.211 "enable_zerocopy_send_server": true, 00:21:29.211 "enable_zerocopy_send_client": false, 00:21:29.211 "zerocopy_threshold": 0, 00:21:29.211 "tls_version": 0, 00:21:29.211 "enable_ktls": false 00:21:29.211 } 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "method": "sock_impl_set_options", 00:21:29.211 "params": { 00:21:29.211 "impl_name": "posix", 00:21:29.211 "recv_buf_size": 2097152, 00:21:29.211 "send_buf_size": 2097152, 00:21:29.211 "enable_recv_pipe": true, 00:21:29.211 "enable_quickack": false, 00:21:29.211 "enable_placement_id": 0, 00:21:29.211 "enable_zerocopy_send_server": true, 00:21:29.211 "enable_zerocopy_send_client": false, 00:21:29.211 "zerocopy_threshold": 0, 00:21:29.211 "tls_version": 0, 00:21:29.211 "enable_ktls": false 00:21:29.211 } 00:21:29.211 } 00:21:29.211 ] 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "subsystem": "vmd", 00:21:29.211 "config": [] 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "subsystem": "accel", 00:21:29.211 "config": [ 00:21:29.211 { 00:21:29.211 "method": "accel_set_options", 00:21:29.211 "params": { 00:21:29.211 "small_cache_size": 128, 00:21:29.211 "large_cache_size": 16, 00:21:29.211 "task_count": 2048, 00:21:29.211 "sequence_count": 2048, 00:21:29.211 "buf_count": 2048 00:21:29.211 } 00:21:29.211 } 00:21:29.211 ] 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "subsystem": "bdev", 00:21:29.211 "config": [ 00:21:29.211 { 00:21:29.211 "method": "bdev_set_options", 00:21:29.211 "params": { 00:21:29.211 "bdev_io_pool_size": 65535, 00:21:29.211 "bdev_io_cache_size": 256, 00:21:29.211 "bdev_auto_examine": true, 00:21:29.211 "iobuf_small_cache_size": 128, 00:21:29.211 "iobuf_large_cache_size": 16 00:21:29.211 } 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "method": "bdev_raid_set_options", 00:21:29.211 "params": { 00:21:29.211 "process_window_size_kb": 1024, 00:21:29.211 "process_max_bandwidth_mb_sec": 0 00:21:29.211 } 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "method": "bdev_iscsi_set_options", 00:21:29.211 "params": { 00:21:29.211 "timeout_sec": 30 00:21:29.211 } 00:21:29.211 }, 00:21:29.211 { 00:21:29.211 "method": "bdev_nvme_set_options", 00:21:29.211 "params": { 00:21:29.211 "action_on_timeout": "none", 00:21:29.211 "timeout_us": 0, 00:21:29.211 "timeout_admin_us": 0, 00:21:29.211 "keep_alive_timeout_ms": 10000, 00:21:29.211 "arbitration_burst": 0, 00:21:29.211 "low_priority_weight": 0, 00:21:29.211 "medium_priority_weight": 0, 00:21:29.211 "high_priority_weight": 0, 00:21:29.211 "nvme_adminq_poll_period_us": 10000, 00:21:29.211 "nvme_ioq_poll_period_us": 0, 00:21:29.211 "io_queue_requests": 0, 00:21:29.211 "delay_cmd_submit": true, 00:21:29.211 "transport_retry_count": 4, 00:21:29.211 "bdev_retry_count": 3, 00:21:29.211 "transport_ack_timeout": 0, 00:21:29.211 "ctrlr_loss_timeout_sec": 0, 00:21:29.211 "reconnect_delay_sec": 0, 00:21:29.211 "fast_io_fail_timeout_sec": 0, 00:21:29.211 "disable_auto_failback": false, 00:21:29.211 "generate_uuids": false, 00:21:29.211 "transport_tos": 0, 00:21:29.211 "nvme_error_stat": false, 00:21:29.211 "rdma_srq_size": 0, 00:21:29.211 "io_path_stat": false, 00:21:29.211 "allow_accel_sequence": false, 00:21:29.211 "rdma_max_cq_size": 0, 00:21:29.211 
"rdma_cm_event_timeout_ms": 0, 00:21:29.211 "dhchap_digests": [ 00:21:29.211 "sha256", 00:21:29.211 "sha384", 00:21:29.211 "sha512" 00:21:29.211 ], 00:21:29.211 "dhchap_dhgroups": [ 00:21:29.211 "null", 00:21:29.211 "ffdhe2048", 00:21:29.212 "ffdhe3072", 00:21:29.212 "ffdhe4096", 00:21:29.212 "ffdhe6144", 00:21:29.212 "ffdhe8192" 00:21:29.212 ] 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "bdev_nvme_set_hotplug", 00:21:29.212 "params": { 00:21:29.212 "period_us": 100000, 00:21:29.212 "enable": false 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "bdev_malloc_create", 00:21:29.212 "params": { 00:21:29.212 "name": "malloc0", 00:21:29.212 "num_blocks": 8192, 00:21:29.212 "block_size": 4096, 00:21:29.212 "physical_block_size": 4096, 00:21:29.212 "uuid": "642d3e29-55f8-43c2-a1cf-418be3398606", 00:21:29.212 "optimal_io_boundary": 0, 00:21:29.212 "md_size": 0, 00:21:29.212 "dif_type": 0, 00:21:29.212 "dif_is_head_of_md": false, 00:21:29.212 "dif_pi_format": 0 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "bdev_wait_for_examine" 00:21:29.212 } 00:21:29.212 ] 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "subsystem": "nbd", 00:21:29.212 "config": [] 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "subsystem": "scheduler", 00:21:29.212 "config": [ 00:21:29.212 { 00:21:29.212 "method": "framework_set_scheduler", 00:21:29.212 "params": { 00:21:29.212 "name": "static" 00:21:29.212 } 00:21:29.212 } 00:21:29.212 ] 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "subsystem": "nvmf", 00:21:29.212 "config": [ 00:21:29.212 { 00:21:29.212 "method": "nvmf_set_config", 00:21:29.212 "params": { 00:21:29.212 "discovery_filter": "match_any", 00:21:29.212 "admin_cmd_passthru": { 00:21:29.212 "identify_ctrlr": false 00:21:29.212 }, 00:21:29.212 "dhchap_digests": [ 00:21:29.212 "sha256", 00:21:29.212 "sha384", 00:21:29.212 "sha512" 00:21:29.212 ], 00:21:29.212 "dhchap_dhgroups": [ 00:21:29.212 "null", 00:21:29.212 "ffdhe2048", 00:21:29.212 "ffdhe3072", 00:21:29.212 "ffdhe4096", 00:21:29.212 "ffdhe6144", 00:21:29.212 "ffdhe8192" 00:21:29.212 ] 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "nvmf_set_max_subsystems", 00:21:29.212 "params": { 00:21:29.212 "max_subsystems": 1024 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "nvmf_set_crdt", 00:21:29.212 "params": { 00:21:29.212 "crdt1": 0, 00:21:29.212 "crdt2": 0, 00:21:29.212 "crdt3": 0 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "nvmf_create_transport", 00:21:29.212 "params": { 00:21:29.212 "trtype": "TCP", 00:21:29.212 "max_queue_depth": 128, 00:21:29.212 "max_io_qpairs_per_ctrlr": 127, 00:21:29.212 "in_capsule_data_size": 4096, 00:21:29.212 "max_io_size": 131072, 00:21:29.212 "io_unit_size": 131072, 00:21:29.212 "max_aq_depth": 128, 00:21:29.212 "num_shared_buffers": 511, 00:21:29.212 "buf_cache_size": 4294967295, 00:21:29.212 "dif_insert_or_strip": false, 00:21:29.212 "zcopy": false, 00:21:29.212 "c2h_success": false, 00:21:29.212 "sock_priority": 0, 00:21:29.212 "abort_timeout_sec": 1, 00:21:29.212 "ack_timeout": 0, 00:21:29.212 "data_wr_pool_size": 0 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "nvmf_create_subsystem", 00:21:29.212 "params": { 00:21:29.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.212 "allow_any_host": false, 00:21:29.212 "serial_number": "SPDK00000000000001", 00:21:29.212 "model_number": "SPDK bdev Controller", 00:21:29.212 "max_namespaces": 10, 00:21:29.212 "min_cntlid": 1, 00:21:29.212 
"max_cntlid": 65519, 00:21:29.212 "ana_reporting": false 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "nvmf_subsystem_add_host", 00:21:29.212 "params": { 00:21:29.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.212 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.212 "psk": "key0" 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.212 "method": "nvmf_subsystem_add_ns", 00:21:29.212 "params": { 00:21:29.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.212 "namespace": { 00:21:29.212 "nsid": 1, 00:21:29.212 "bdev_name": "malloc0", 00:21:29.212 "nguid": "642D3E2955F843C2A1CF418BE3398606", 00:21:29.212 "uuid": "642d3e29-55f8-43c2-a1cf-418be3398606", 00:21:29.212 "no_auto_visible": false 00:21:29.212 } 00:21:29.212 } 00:21:29.212 }, 00:21:29.212 { 00:21:29.213 "method": "nvmf_subsystem_add_listener", 00:21:29.213 "params": { 00:21:29.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.213 "listen_address": { 00:21:29.213 "trtype": "TCP", 00:21:29.213 "adrfam": "IPv4", 00:21:29.213 "traddr": "10.0.0.2", 00:21:29.213 "trsvcid": "4420" 00:21:29.213 }, 00:21:29.213 "secure_channel": true 00:21:29.213 } 00:21:29.213 } 00:21:29.213 ] 00:21:29.213 } 00:21:29.213 ] 00:21:29.213 }' 00:21:29.213 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:29.473 "subsystems": [ 00:21:29.473 { 00:21:29.473 "subsystem": "keyring", 00:21:29.473 "config": [ 00:21:29.473 { 00:21:29.473 "method": "keyring_file_add_key", 00:21:29.473 "params": { 00:21:29.473 "name": "key0", 00:21:29.473 "path": "/tmp/tmp.gZYI21px8v" 00:21:29.473 } 00:21:29.473 } 00:21:29.473 ] 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "subsystem": "iobuf", 00:21:29.473 "config": [ 00:21:29.473 { 00:21:29.473 "method": "iobuf_set_options", 00:21:29.473 "params": { 00:21:29.473 "small_pool_count": 8192, 00:21:29.473 "large_pool_count": 1024, 00:21:29.473 "small_bufsize": 8192, 00:21:29.473 "large_bufsize": 135168, 00:21:29.473 "enable_numa": false 00:21:29.473 } 00:21:29.473 } 00:21:29.473 ] 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "subsystem": "sock", 00:21:29.473 "config": [ 00:21:29.473 { 00:21:29.473 "method": "sock_set_default_impl", 00:21:29.473 "params": { 00:21:29.473 "impl_name": "posix" 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "sock_impl_set_options", 00:21:29.473 "params": { 00:21:29.473 "impl_name": "ssl", 00:21:29.473 "recv_buf_size": 4096, 00:21:29.473 "send_buf_size": 4096, 00:21:29.473 "enable_recv_pipe": true, 00:21:29.473 "enable_quickack": false, 00:21:29.473 "enable_placement_id": 0, 00:21:29.473 "enable_zerocopy_send_server": true, 00:21:29.473 "enable_zerocopy_send_client": false, 00:21:29.473 "zerocopy_threshold": 0, 00:21:29.473 "tls_version": 0, 00:21:29.473 "enable_ktls": false 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "sock_impl_set_options", 00:21:29.473 "params": { 00:21:29.473 "impl_name": "posix", 00:21:29.473 "recv_buf_size": 2097152, 00:21:29.473 "send_buf_size": 2097152, 00:21:29.473 "enable_recv_pipe": true, 00:21:29.473 "enable_quickack": false, 00:21:29.473 "enable_placement_id": 0, 00:21:29.473 "enable_zerocopy_send_server": true, 00:21:29.473 "enable_zerocopy_send_client": false, 00:21:29.473 "zerocopy_threshold": 0, 00:21:29.473 "tls_version": 0, 00:21:29.473 "enable_ktls": false 00:21:29.473 } 00:21:29.473 
} 00:21:29.473 ] 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "subsystem": "vmd", 00:21:29.473 "config": [] 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "subsystem": "accel", 00:21:29.473 "config": [ 00:21:29.473 { 00:21:29.473 "method": "accel_set_options", 00:21:29.473 "params": { 00:21:29.473 "small_cache_size": 128, 00:21:29.473 "large_cache_size": 16, 00:21:29.473 "task_count": 2048, 00:21:29.473 "sequence_count": 2048, 00:21:29.473 "buf_count": 2048 00:21:29.473 } 00:21:29.473 } 00:21:29.473 ] 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "subsystem": "bdev", 00:21:29.473 "config": [ 00:21:29.473 { 00:21:29.473 "method": "bdev_set_options", 00:21:29.473 "params": { 00:21:29.473 "bdev_io_pool_size": 65535, 00:21:29.473 "bdev_io_cache_size": 256, 00:21:29.473 "bdev_auto_examine": true, 00:21:29.473 "iobuf_small_cache_size": 128, 00:21:29.473 "iobuf_large_cache_size": 16 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "bdev_raid_set_options", 00:21:29.473 "params": { 00:21:29.473 "process_window_size_kb": 1024, 00:21:29.473 "process_max_bandwidth_mb_sec": 0 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "bdev_iscsi_set_options", 00:21:29.473 "params": { 00:21:29.473 "timeout_sec": 30 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "bdev_nvme_set_options", 00:21:29.473 "params": { 00:21:29.473 "action_on_timeout": "none", 00:21:29.473 "timeout_us": 0, 00:21:29.473 "timeout_admin_us": 0, 00:21:29.473 "keep_alive_timeout_ms": 10000, 00:21:29.473 "arbitration_burst": 0, 00:21:29.473 "low_priority_weight": 0, 00:21:29.473 "medium_priority_weight": 0, 00:21:29.473 "high_priority_weight": 0, 00:21:29.473 "nvme_adminq_poll_period_us": 10000, 00:21:29.473 "nvme_ioq_poll_period_us": 0, 00:21:29.473 "io_queue_requests": 512, 00:21:29.473 "delay_cmd_submit": true, 00:21:29.473 "transport_retry_count": 4, 00:21:29.473 "bdev_retry_count": 3, 00:21:29.473 "transport_ack_timeout": 0, 00:21:29.473 "ctrlr_loss_timeout_sec": 0, 00:21:29.473 "reconnect_delay_sec": 0, 00:21:29.473 "fast_io_fail_timeout_sec": 0, 00:21:29.473 "disable_auto_failback": false, 00:21:29.473 "generate_uuids": false, 00:21:29.473 "transport_tos": 0, 00:21:29.473 "nvme_error_stat": false, 00:21:29.473 "rdma_srq_size": 0, 00:21:29.473 "io_path_stat": false, 00:21:29.473 "allow_accel_sequence": false, 00:21:29.473 "rdma_max_cq_size": 0, 00:21:29.473 "rdma_cm_event_timeout_ms": 0, 00:21:29.473 "dhchap_digests": [ 00:21:29.473 "sha256", 00:21:29.473 "sha384", 00:21:29.473 "sha512" 00:21:29.473 ], 00:21:29.473 "dhchap_dhgroups": [ 00:21:29.473 "null", 00:21:29.473 "ffdhe2048", 00:21:29.473 "ffdhe3072", 00:21:29.473 "ffdhe4096", 00:21:29.473 "ffdhe6144", 00:21:29.473 "ffdhe8192" 00:21:29.473 ] 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "bdev_nvme_attach_controller", 00:21:29.473 "params": { 00:21:29.473 "name": "TLSTEST", 00:21:29.473 "trtype": "TCP", 00:21:29.473 "adrfam": "IPv4", 00:21:29.473 "traddr": "10.0.0.2", 00:21:29.473 "trsvcid": "4420", 00:21:29.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.473 "prchk_reftag": false, 00:21:29.473 "prchk_guard": false, 00:21:29.473 "ctrlr_loss_timeout_sec": 0, 00:21:29.473 "reconnect_delay_sec": 0, 00:21:29.473 "fast_io_fail_timeout_sec": 0, 00:21:29.473 "psk": "key0", 00:21:29.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.473 "hdgst": false, 00:21:29.473 "ddgst": false, 00:21:29.473 "multipath": "multipath" 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": 
"bdev_nvme_set_hotplug", 00:21:29.473 "params": { 00:21:29.473 "period_us": 100000, 00:21:29.473 "enable": false 00:21:29.473 } 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "method": "bdev_wait_for_examine" 00:21:29.473 } 00:21:29.473 ] 00:21:29.473 }, 00:21:29.473 { 00:21:29.473 "subsystem": "nbd", 00:21:29.473 "config": [] 00:21:29.473 } 00:21:29.473 ] 00:21:29.473 }' 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2074504 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2074504 ']' 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2074504 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074504 00:21:29.473 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:29.474 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:29.474 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074504' 00:21:29.474 killing process with pid 2074504 00:21:29.474 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2074504 00:21:29.474 Received shutdown signal, test time was about 10.000000 seconds 00:21:29.474 00:21:29.474 Latency(us) 00:21:29.474 [2024-11-20T09:39:01.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.474 [2024-11-20T09:39:01.850Z] =================================================================================================================== 00:21:29.474 [2024-11-20T09:39:01.850Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:29.474 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2074504 00:21:29.474 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2074142 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2074142 ']' 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2074142 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074142 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074142' 00:21:29.735 killing process with pid 2074142 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2074142 00:21:29.735 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2074142 00:21:29.735 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:29.735 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.735 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.735 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.735 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:29.735 "subsystems": [ 00:21:29.735 { 00:21:29.735 "subsystem": "keyring", 00:21:29.735 "config": [ 00:21:29.735 { 00:21:29.735 "method": "keyring_file_add_key", 00:21:29.735 "params": { 00:21:29.735 "name": "key0", 00:21:29.735 "path": "/tmp/tmp.gZYI21px8v" 00:21:29.735 } 00:21:29.735 } 00:21:29.735 ] 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "subsystem": "iobuf", 00:21:29.735 "config": [ 00:21:29.735 { 00:21:29.735 "method": "iobuf_set_options", 00:21:29.735 "params": { 00:21:29.735 "small_pool_count": 8192, 00:21:29.735 "large_pool_count": 1024, 00:21:29.735 "small_bufsize": 8192, 00:21:29.735 "large_bufsize": 135168, 00:21:29.735 "enable_numa": false 00:21:29.735 } 00:21:29.735 } 00:21:29.735 ] 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "subsystem": "sock", 00:21:29.735 "config": [ 00:21:29.735 { 00:21:29.735 "method": "sock_set_default_impl", 00:21:29.735 "params": { 00:21:29.735 "impl_name": "posix" 00:21:29.735 } 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "method": "sock_impl_set_options", 00:21:29.735 "params": { 00:21:29.735 "impl_name": "ssl", 00:21:29.735 "recv_buf_size": 4096, 00:21:29.735 "send_buf_size": 4096, 00:21:29.735 "enable_recv_pipe": true, 00:21:29.735 "enable_quickack": false, 00:21:29.735 "enable_placement_id": 0, 00:21:29.735 "enable_zerocopy_send_server": true, 00:21:29.735 "enable_zerocopy_send_client": false, 00:21:29.735 "zerocopy_threshold": 0, 00:21:29.735 "tls_version": 0, 00:21:29.735 "enable_ktls": false 00:21:29.735 } 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "method": "sock_impl_set_options", 00:21:29.735 "params": { 00:21:29.735 "impl_name": "posix", 00:21:29.735 "recv_buf_size": 2097152, 00:21:29.735 "send_buf_size": 2097152, 00:21:29.735 "enable_recv_pipe": true, 00:21:29.735 "enable_quickack": false, 00:21:29.735 "enable_placement_id": 0, 00:21:29.735 "enable_zerocopy_send_server": true, 00:21:29.735 "enable_zerocopy_send_client": false, 00:21:29.735 "zerocopy_threshold": 0, 00:21:29.735 "tls_version": 0, 00:21:29.735 "enable_ktls": false 00:21:29.735 } 00:21:29.735 } 00:21:29.735 ] 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "subsystem": "vmd", 00:21:29.735 "config": [] 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "subsystem": "accel", 00:21:29.735 "config": [ 00:21:29.735 { 00:21:29.735 "method": "accel_set_options", 00:21:29.735 "params": { 00:21:29.735 "small_cache_size": 128, 00:21:29.735 "large_cache_size": 16, 00:21:29.735 "task_count": 2048, 00:21:29.735 "sequence_count": 2048, 00:21:29.735 "buf_count": 2048 00:21:29.735 } 00:21:29.735 } 00:21:29.735 ] 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "subsystem": "bdev", 00:21:29.735 "config": [ 00:21:29.735 { 00:21:29.735 "method": "bdev_set_options", 00:21:29.735 "params": { 00:21:29.735 "bdev_io_pool_size": 65535, 00:21:29.735 "bdev_io_cache_size": 256, 00:21:29.735 "bdev_auto_examine": true, 00:21:29.735 "iobuf_small_cache_size": 128, 00:21:29.735 "iobuf_large_cache_size": 16 00:21:29.735 } 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "method": "bdev_raid_set_options", 00:21:29.735 "params": { 00:21:29.735 
"process_window_size_kb": 1024, 00:21:29.735 "process_max_bandwidth_mb_sec": 0 00:21:29.735 } 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "method": "bdev_iscsi_set_options", 00:21:29.735 "params": { 00:21:29.735 "timeout_sec": 30 00:21:29.735 } 00:21:29.735 }, 00:21:29.735 { 00:21:29.735 "method": "bdev_nvme_set_options", 00:21:29.735 "params": { 00:21:29.735 "action_on_timeout": "none", 00:21:29.735 "timeout_us": 0, 00:21:29.735 "timeout_admin_us": 0, 00:21:29.735 "keep_alive_timeout_ms": 10000, 00:21:29.735 "arbitration_burst": 0, 00:21:29.735 "low_priority_weight": 0, 00:21:29.735 "medium_priority_weight": 0, 00:21:29.735 "high_priority_weight": 0, 00:21:29.736 "nvme_adminq_poll_period_us": 10000, 00:21:29.736 "nvme_ioq_poll_period_us": 0, 00:21:29.736 "io_queue_requests": 0, 00:21:29.736 "delay_cmd_submit": true, 00:21:29.736 "transport_retry_count": 4, 00:21:29.736 "bdev_retry_count": 3, 00:21:29.736 "transport_ack_timeout": 0, 00:21:29.736 "ctrlr_loss_timeout_sec": 0, 00:21:29.736 "reconnect_delay_sec": 0, 00:21:29.736 "fast_io_fail_timeout_sec": 0, 00:21:29.736 "disable_auto_failback": false, 00:21:29.736 "generate_uuids": false, 00:21:29.736 "transport_tos": 0, 00:21:29.736 "nvme_error_stat": false, 00:21:29.736 "rdma_srq_size": 0, 00:21:29.736 "io_path_stat": false, 00:21:29.736 "allow_accel_sequence": false, 00:21:29.736 "rdma_max_cq_size": 0, 00:21:29.736 "rdma_cm_event_timeout_ms": 0, 00:21:29.736 "dhchap_digests": [ 00:21:29.736 "sha256", 00:21:29.736 "sha384", 00:21:29.736 "sha512" 00:21:29.736 ], 00:21:29.736 "dhchap_dhgroups": [ 00:21:29.736 "null", 00:21:29.736 "ffdhe2048", 00:21:29.736 "ffdhe3072", 00:21:29.736 "ffdhe4096", 00:21:29.736 "ffdhe6144", 00:21:29.736 "ffdhe8192" 00:21:29.736 ] 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "bdev_nvme_set_hotplug", 00:21:29.736 "params": { 00:21:29.736 "period_us": 100000, 00:21:29.736 "enable": false 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "bdev_malloc_create", 00:21:29.736 "params": { 00:21:29.736 "name": "malloc0", 00:21:29.736 "num_blocks": 8192, 00:21:29.736 "block_size": 4096, 00:21:29.736 "physical_block_size": 4096, 00:21:29.736 "uuid": "642d3e29-55f8-43c2-a1cf-418be3398606", 00:21:29.736 "optimal_io_boundary": 0, 00:21:29.736 "md_size": 0, 00:21:29.736 "dif_type": 0, 00:21:29.736 "dif_is_head_of_md": false, 00:21:29.736 "dif_pi_format": 0 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "bdev_wait_for_examine" 00:21:29.736 } 00:21:29.736 ] 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "subsystem": "nbd", 00:21:29.736 "config": [] 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "subsystem": "scheduler", 00:21:29.736 "config": [ 00:21:29.736 { 00:21:29.736 "method": "framework_set_scheduler", 00:21:29.736 "params": { 00:21:29.736 "name": "static" 00:21:29.736 } 00:21:29.736 } 00:21:29.736 ] 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "subsystem": "nvmf", 00:21:29.736 "config": [ 00:21:29.736 { 00:21:29.736 "method": "nvmf_set_config", 00:21:29.736 "params": { 00:21:29.736 "discovery_filter": "match_any", 00:21:29.736 "admin_cmd_passthru": { 00:21:29.736 "identify_ctrlr": false 00:21:29.736 }, 00:21:29.736 "dhchap_digests": [ 00:21:29.736 "sha256", 00:21:29.736 "sha384", 00:21:29.736 "sha512" 00:21:29.736 ], 00:21:29.736 "dhchap_dhgroups": [ 00:21:29.736 "null", 00:21:29.736 "ffdhe2048", 00:21:29.736 "ffdhe3072", 00:21:29.736 "ffdhe4096", 00:21:29.736 "ffdhe6144", 00:21:29.736 "ffdhe8192" 00:21:29.736 ] 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 
00:21:29.736 "method": "nvmf_set_max_subsystems", 00:21:29.736 "params": { 00:21:29.736 "max_subsystems": 1024 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "nvmf_set_crdt", 00:21:29.736 "params": { 00:21:29.736 "crdt1": 0, 00:21:29.736 "crdt2": 0, 00:21:29.736 "crdt3": 0 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "nvmf_create_transport", 00:21:29.736 "params": { 00:21:29.736 "trtype": "TCP", 00:21:29.736 "max_queue_depth": 128, 00:21:29.736 "max_io_qpairs_per_ctrlr": 127, 00:21:29.736 "in_capsule_data_size": 4096, 00:21:29.736 "max_io_size": 131072, 00:21:29.736 "io_unit_size": 131072, 00:21:29.736 "max_aq_depth": 128, 00:21:29.736 "num_shared_buffers": 511, 00:21:29.736 "buf_cache_size": 4294967295, 00:21:29.736 "dif_insert_or_strip": false, 00:21:29.736 "zcopy": false, 00:21:29.736 "c2h_success": false, 00:21:29.736 "sock_priority": 0, 00:21:29.736 "abort_timeout_sec": 1, 00:21:29.736 "ack_timeout": 0, 00:21:29.736 "data_wr_pool_size": 0 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "nvmf_create_subsystem", 00:21:29.736 "params": { 00:21:29.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.736 "allow_any_host": false, 00:21:29.736 "serial_number": "SPDK00000000000001", 00:21:29.736 "model_number": "SPDK bdev Controller", 00:21:29.736 "max_namespaces": 10, 00:21:29.736 "min_cntlid": 1, 00:21:29.736 "max_cntlid": 65519, 00:21:29.736 "ana_reporting": false 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "nvmf_subsystem_add_host", 00:21:29.736 "params": { 00:21:29.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.736 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.736 "psk": "key0" 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "nvmf_subsystem_add_ns", 00:21:29.736 "params": { 00:21:29.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.736 "namespace": { 00:21:29.736 "nsid": 1, 00:21:29.736 "bdev_name": "malloc0", 00:21:29.736 "nguid": "642D3E2955F843C2A1CF418BE3398606", 00:21:29.736 "uuid": "642d3e29-55f8-43c2-a1cf-418be3398606", 00:21:29.736 "no_auto_visible": false 00:21:29.736 } 00:21:29.736 } 00:21:29.736 }, 00:21:29.736 { 00:21:29.736 "method": "nvmf_subsystem_add_listener", 00:21:29.736 "params": { 00:21:29.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.736 "listen_address": { 00:21:29.736 "trtype": "TCP", 00:21:29.736 "adrfam": "IPv4", 00:21:29.736 "traddr": "10.0.0.2", 00:21:29.736 "trsvcid": "4420" 00:21:29.736 }, 00:21:29.736 "secure_channel": true 00:21:29.736 } 00:21:29.736 } 00:21:29.736 ] 00:21:29.736 } 00:21:29.736 ] 00:21:29.736 }' 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2074971 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2074971 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2074971 ']' 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:21:29.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.736 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.736 [2024-11-20 10:39:02.078251] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:29.736 [2024-11-20 10:39:02.078330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.997 [2024-11-20 10:39:02.170734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.997 [2024-11-20 10:39:02.207918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.997 [2024-11-20 10:39:02.207960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.997 [2024-11-20 10:39:02.207966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.997 [2024-11-20 10:39:02.207971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.997 [2024-11-20 10:39:02.207975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.997 [2024-11-20 10:39:02.208467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.258 [2024-11-20 10:39:02.401081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.258 [2024-11-20 10:39:02.433107] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.258 [2024-11-20 10:39:02.433317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.518 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.518 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.518 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.518 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.518 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2075319 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2075319 /var/tmp/bdevperf.sock 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2075319 ']' 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:30.779 
10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.779 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:30.779 "subsystems": [ 00:21:30.779 { 00:21:30.779 "subsystem": "keyring", 00:21:30.779 "config": [ 00:21:30.779 { 00:21:30.779 "method": "keyring_file_add_key", 00:21:30.779 "params": { 00:21:30.779 "name": "key0", 00:21:30.779 "path": "/tmp/tmp.gZYI21px8v" 00:21:30.779 } 00:21:30.779 } 00:21:30.779 ] 00:21:30.779 }, 00:21:30.779 { 00:21:30.779 "subsystem": "iobuf", 00:21:30.779 "config": [ 00:21:30.779 { 00:21:30.779 "method": "iobuf_set_options", 00:21:30.779 "params": { 00:21:30.779 "small_pool_count": 8192, 00:21:30.779 "large_pool_count": 1024, 00:21:30.779 "small_bufsize": 8192, 00:21:30.779 "large_bufsize": 135168, 00:21:30.779 "enable_numa": false 00:21:30.779 } 00:21:30.779 } 00:21:30.779 ] 00:21:30.779 }, 00:21:30.779 { 00:21:30.779 "subsystem": "sock", 00:21:30.779 "config": [ 00:21:30.779 { 00:21:30.779 "method": "sock_set_default_impl", 00:21:30.779 "params": { 00:21:30.779 "impl_name": "posix" 00:21:30.779 } 00:21:30.779 }, 00:21:30.779 { 00:21:30.779 "method": "sock_impl_set_options", 00:21:30.779 "params": { 00:21:30.779 "impl_name": "ssl", 00:21:30.779 "recv_buf_size": 4096, 00:21:30.779 "send_buf_size": 4096, 00:21:30.779 "enable_recv_pipe": true, 00:21:30.779 "enable_quickack": false, 00:21:30.779 "enable_placement_id": 0, 00:21:30.779 "enable_zerocopy_send_server": true, 00:21:30.779 "enable_zerocopy_send_client": false, 00:21:30.780 "zerocopy_threshold": 0, 00:21:30.780 "tls_version": 0, 00:21:30.780 "enable_ktls": false 00:21:30.780 } 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "method": "sock_impl_set_options", 00:21:30.780 "params": { 00:21:30.780 "impl_name": "posix", 00:21:30.780 "recv_buf_size": 2097152, 00:21:30.780 "send_buf_size": 2097152, 00:21:30.780 "enable_recv_pipe": true, 00:21:30.780 "enable_quickack": false, 00:21:30.780 "enable_placement_id": 0, 00:21:30.780 "enable_zerocopy_send_server": true, 00:21:30.780 "enable_zerocopy_send_client": false, 00:21:30.780 "zerocopy_threshold": 0, 00:21:30.780 "tls_version": 0, 00:21:30.780 "enable_ktls": false 00:21:30.780 } 00:21:30.780 } 00:21:30.780 ] 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "subsystem": "vmd", 00:21:30.780 "config": [] 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "subsystem": "accel", 00:21:30.780 "config": [ 00:21:30.780 { 00:21:30.780 "method": "accel_set_options", 00:21:30.780 "params": { 00:21:30.780 "small_cache_size": 128, 00:21:30.780 "large_cache_size": 16, 00:21:30.780 "task_count": 2048, 00:21:30.780 "sequence_count": 2048, 00:21:30.780 "buf_count": 2048 00:21:30.780 } 00:21:30.780 } 00:21:30.780 ] 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "subsystem": "bdev", 00:21:30.780 "config": [ 00:21:30.780 { 00:21:30.780 "method": "bdev_set_options", 00:21:30.780 "params": { 00:21:30.780 "bdev_io_pool_size": 65535, 00:21:30.780 "bdev_io_cache_size": 256, 00:21:30.780 "bdev_auto_examine": true, 00:21:30.780 "iobuf_small_cache_size": 128, 00:21:30.780 "iobuf_large_cache_size": 16 00:21:30.780 } 00:21:30.780 
}, 00:21:30.780 { 00:21:30.780 "method": "bdev_raid_set_options", 00:21:30.780 "params": { 00:21:30.780 "process_window_size_kb": 1024, 00:21:30.780 "process_max_bandwidth_mb_sec": 0 00:21:30.780 } 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "method": "bdev_iscsi_set_options", 00:21:30.780 "params": { 00:21:30.780 "timeout_sec": 30 00:21:30.780 } 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "method": "bdev_nvme_set_options", 00:21:30.780 "params": { 00:21:30.780 "action_on_timeout": "none", 00:21:30.780 "timeout_us": 0, 00:21:30.780 "timeout_admin_us": 0, 00:21:30.780 "keep_alive_timeout_ms": 10000, 00:21:30.780 "arbitration_burst": 0, 00:21:30.780 "low_priority_weight": 0, 00:21:30.780 "medium_priority_weight": 0, 00:21:30.780 "high_priority_weight": 0, 00:21:30.780 "nvme_adminq_poll_period_us": 10000, 00:21:30.780 "nvme_ioq_poll_period_us": 0, 00:21:30.780 "io_queue_requests": 512, 00:21:30.780 "delay_cmd_submit": true, 00:21:30.780 "transport_retry_count": 4, 00:21:30.780 "bdev_retry_count": 3, 00:21:30.780 "transport_ack_timeout": 0, 00:21:30.780 "ctrlr_loss_timeout_sec": 0, 00:21:30.780 "reconnect_delay_sec": 0, 00:21:30.780 "fast_io_fail_timeout_sec": 0, 00:21:30.780 "disable_auto_failback": false, 00:21:30.780 "generate_uuids": false, 00:21:30.780 "transport_tos": 0, 00:21:30.780 "nvme_error_stat": false, 00:21:30.780 "rdma_srq_size": 0, 00:21:30.780 "io_path_stat": false, 00:21:30.780 "allow_accel_sequence": false, 00:21:30.780 "rdma_max_cq_size": 0, 00:21:30.780 "rdma_cm_event_timeout_ms": 0, 00:21:30.780 "dhchap_digests": [ 00:21:30.780 "sha256", 00:21:30.780 "sha384", 00:21:30.780 "sha512" 00:21:30.780 ], 00:21:30.780 "dhchap_dhgroups": [ 00:21:30.780 "null", 00:21:30.780 "ffdhe2048", 00:21:30.780 "ffdhe3072", 00:21:30.780 "ffdhe4096", 00:21:30.780 "ffdhe6144", 00:21:30.780 "ffdhe8192" 00:21:30.780 ] 00:21:30.780 } 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "method": "bdev_nvme_attach_controller", 00:21:30.780 "params": { 00:21:30.780 "name": "TLSTEST", 00:21:30.780 "trtype": "TCP", 00:21:30.780 "adrfam": "IPv4", 00:21:30.780 "traddr": "10.0.0.2", 00:21:30.780 "trsvcid": "4420", 00:21:30.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.780 "prchk_reftag": false, 00:21:30.780 "prchk_guard": false, 00:21:30.780 "ctrlr_loss_timeout_sec": 0, 00:21:30.780 "reconnect_delay_sec": 0, 00:21:30.780 "fast_io_fail_timeout_sec": 0, 00:21:30.780 "psk": "key0", 00:21:30.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.780 "hdgst": false, 00:21:30.780 "ddgst": false, 00:21:30.780 "multipath": "multipath" 00:21:30.780 } 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "method": "bdev_nvme_set_hotplug", 00:21:30.780 "params": { 00:21:30.780 "period_us": 100000, 00:21:30.780 "enable": false 00:21:30.780 } 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "method": "bdev_wait_for_examine" 00:21:30.780 } 00:21:30.780 ] 00:21:30.780 }, 00:21:30.780 { 00:21:30.780 "subsystem": "nbd", 00:21:30.780 "config": [] 00:21:30.780 } 00:21:30.780 ] 00:21:30.780 }' 00:21:30.780 [2024-11-20 10:39:02.965275] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
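The /dev/fd/62 and /dev/fd/63 config files handed to nvmf_tgt and bdevperf above are bash process substitutions: the suite captures each save_config dump into a shell variable and replays it as an anonymous startup config without touching disk. A minimal sketch of the pattern, assuming the variable names used by target/tls.sh (nvmfappstart and bdevperf stand in for the full helper and binary paths shown in the log):

  # Snapshot the live configuration of each application over its RPC socket...
  tgtconf=$(rpc.py save_config)
  bdevperfconf=$(rpc.py -s /var/tmp/bdevperf.sock save_config)
  # ...then replay it at startup; <(...) appears to the app as /dev/fd/NN.
  nvmfappstart -m 0x2 -c <(echo "$tgtconf")
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")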
00:21:30.780 [2024-11-20 10:39:02.965329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075319 ] 00:21:30.780 [2024-11-20 10:39:03.049735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.780 [2024-11-20 10:39:03.078409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.040 [2024-11-20 10:39:03.212012] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.611 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.611 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.611 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.611 Running I/O for 10 seconds... 00:21:33.661 5523.00 IOPS, 21.57 MiB/s [2024-11-20T09:39:06.976Z] 5552.00 IOPS, 21.69 MiB/s [2024-11-20T09:39:07.916Z] 5612.67 IOPS, 21.92 MiB/s [2024-11-20T09:39:08.856Z] 5732.00 IOPS, 22.39 MiB/s [2024-11-20T09:39:10.237Z] 5636.40 IOPS, 22.02 MiB/s [2024-11-20T09:39:11.175Z] 5536.00 IOPS, 21.62 MiB/s [2024-11-20T09:39:12.116Z] 5475.14 IOPS, 21.39 MiB/s [2024-11-20T09:39:13.056Z] 5546.00 IOPS, 21.66 MiB/s [2024-11-20T09:39:13.996Z] 5521.11 IOPS, 21.57 MiB/s [2024-11-20T09:39:13.996Z] 5525.60 IOPS, 21.58 MiB/s 00:21:41.620 Latency(us) 00:21:41.620 [2024-11-20T09:39:13.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.620 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:41.620 Verification LBA range: start 0x0 length 0x2000 00:21:41.620 TLSTESTn1 : 10.01 5531.06 21.61 0.00 0.00 23112.05 4751.36 41724.59 00:21:41.620 [2024-11-20T09:39:13.996Z] =================================================================================================================== 00:21:41.620 [2024-11-20T09:39:13.996Z] Total : 5531.06 21.61 0.00 0.00 23112.05 4751.36 41724.59 00:21:41.620 { 00:21:41.620 "results": [ 00:21:41.620 { 00:21:41.620 "job": "TLSTESTn1", 00:21:41.620 "core_mask": "0x4", 00:21:41.620 "workload": "verify", 00:21:41.620 "status": "finished", 00:21:41.620 "verify_range": { 00:21:41.620 "start": 0, 00:21:41.620 "length": 8192 00:21:41.620 }, 00:21:41.620 "queue_depth": 128, 00:21:41.620 "io_size": 4096, 00:21:41.620 "runtime": 10.013094, 00:21:41.620 "iops": 5531.057633135173, 00:21:41.620 "mibps": 21.60569387943427, 00:21:41.620 "io_failed": 0, 00:21:41.620 "io_timeout": 0, 00:21:41.620 "avg_latency_us": 23112.04726167476, 00:21:41.620 "min_latency_us": 4751.36, 00:21:41.620 "max_latency_us": 41724.58666666667 00:21:41.620 } 00:21:41.620 ], 00:21:41.620 "core_count": 1 00:21:41.620 } 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2075319 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2075319 ']' 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2075319 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2075319 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2075319' 00:21:41.620 killing process with pid 2075319 00:21:41.620 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2075319 00:21:41.620 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.620 00:21:41.620 Latency(us) 00:21:41.620 [2024-11-20T09:39:13.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.620 [2024-11-20T09:39:13.996Z] =================================================================================================================== 00:21:41.620 [2024-11-20T09:39:13.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.621 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2075319 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2074971 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2074971 ']' 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2074971 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074971 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074971' 00:21:41.881 killing process with pid 2074971 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2074971 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2074971 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2077817 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2077817 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
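The 10-second verify run above is driven over bdevperf's own RPC socket rather than its command line: -z starts the application idle, and I/O only begins when bdevperf.py sends the perform_tests RPC. A condensed sketch of that control flow (paths shortened from the log; -t 20 is the RPC wait timeout, distinct from the 10-second test duration set by -t 10):

  # -z: start idle and wait for RPC instead of running immediately.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # Kick off the configured workload; results come back as the JSON block seen above.
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests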
00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2077817 ']' 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.881 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.142 [2024-11-20 10:39:14.293862] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:42.142 [2024-11-20 10:39:14.293922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.142 [2024-11-20 10:39:14.389891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.142 [2024-11-20 10:39:14.434852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.142 [2024-11-20 10:39:14.434904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.142 [2024-11-20 10:39:14.434912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.142 [2024-11-20 10:39:14.434920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.142 [2024-11-20 10:39:14.434931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
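target/tls.sh@221 now re-runs setup_nvmf_tgt against the freshly started target. Condensed, the helper (target/tls.sh@50-59) issues the same RPC sequence each time it appears in this log, which the entries below replay; the option comments are best-effort readings of scripts/rpc.py, and rpc.py again abbreviates its full path:

  rpc.py nvmf_create_transport -t tcp -o                # -o disables the C2H success optimization
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) channel
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0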
00:21:42.142 [2024-11-20 10:39:14.435709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gZYI21px8v 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gZYI21px8v 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:43.084 [2024-11-20 10:39:15.306427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.084 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:43.344 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:43.344 [2024-11-20 10:39:15.703435] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.344 [2024-11-20 10:39:15.703778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.605 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:43.605 malloc0 00:21:43.605 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:43.866 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:44.126 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2078361 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2078361 /var/tmp/bdevperf.sock 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2078361 ']' 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.386 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.386 [2024-11-20 10:39:16.564343] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:44.386 [2024-11-20 10:39:16.564414] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078361 ] 00:21:44.386 [2024-11-20 10:39:16.653935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.386 [2024-11-20 10:39:16.688136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.325 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.325 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:45.325 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:45.325 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:45.585 [2024-11-20 10:39:17.721964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.585 nvme0n1 00:21:45.585 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.585 Running I/O for 1 seconds... 
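Note: everything TLS-specific in this test is contained in the handful of RPCs traced above (target/tls.sh @52-@59 for the target, @229-@230 for the initiator): the listener is created with -k to enable TLS, both sides load the same PSK interchange file into their keyrings, and the host entry plus the attach call reference that key by name. Condensed from the exact commands in the trace:

rpc=scripts/rpc.py; key=/tmp/tmp.gZYI21px8v          # PSK file generated earlier in the test
# Target side (default RPC socket /var/tmp/spdk.sock):
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# Initiator side (bdevperf's RPC socket):
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1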
00:21:46.968 4907.00 IOPS, 19.17 MiB/s 00:21:46.968 Latency(us) 00:21:46.968 [2024-11-20T09:39:19.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.968 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:46.968 Verification LBA range: start 0x0 length 0x2000 00:21:46.968 nvme0n1 : 1.01 4972.71 19.42 0.00 0.00 25586.37 4478.29 30801.92 00:21:46.968 [2024-11-20T09:39:19.344Z] =================================================================================================================== 00:21:46.968 [2024-11-20T09:39:19.344Z] Total : 4972.71 19.42 0.00 0.00 25586.37 4478.29 30801.92 00:21:46.968 { 00:21:46.968 "results": [ 00:21:46.968 { 00:21:46.968 "job": "nvme0n1", 00:21:46.968 "core_mask": "0x2", 00:21:46.968 "workload": "verify", 00:21:46.968 "status": "finished", 00:21:46.968 "verify_range": { 00:21:46.968 "start": 0, 00:21:46.968 "length": 8192 00:21:46.968 }, 00:21:46.968 "queue_depth": 128, 00:21:46.968 "io_size": 4096, 00:21:46.968 "runtime": 1.012527, 00:21:46.968 "iops": 4972.706900655488, 00:21:46.968 "mibps": 19.4246363306855, 00:21:46.968 "io_failed": 0, 00:21:46.968 "io_timeout": 0, 00:21:46.968 "avg_latency_us": 25586.373783515395, 00:21:46.968 "min_latency_us": 4478.293333333333, 00:21:46.968 "max_latency_us": 30801.92 00:21:46.968 } 00:21:46.968 ], 00:21:46.968 "core_count": 1 00:21:46.968 } 00:21:46.968 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2078361 00:21:46.968 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2078361 ']' 00:21:46.968 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2078361 00:21:46.968 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:46.968 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.968 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2078361 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2078361' 00:21:46.968 killing process with pid 2078361 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2078361 00:21:46.968 Received shutdown signal, test time was about 1.000000 seconds 00:21:46.968 00:21:46.968 Latency(us) 00:21:46.968 [2024-11-20T09:39:19.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.968 [2024-11-20T09:39:19.344Z] =================================================================================================================== 00:21:46.968 [2024-11-20T09:39:19.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2078361 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2077817 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2077817 ']' 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2077817 00:21:46.968 10:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077817 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077817' 00:21:46.968 killing process with pid 2077817 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2077817 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2077817 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2078858 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2078858 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2078858 ']' 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.968 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.229 [2024-11-20 10:39:19.360914] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:47.229 [2024-11-20 10:39:19.360969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.229 [2024-11-20 10:39:19.455475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.229 [2024-11-20 10:39:19.495667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.229 [2024-11-20 10:39:19.495715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:47.229 [2024-11-20 10:39:19.495723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.229 [2024-11-20 10:39:19.495736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.229 [2024-11-20 10:39:19.495742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.229 [2024-11-20 10:39:19.496433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.170 [2024-11-20 10:39:20.229629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.170 malloc0 00:21:48.170 [2024-11-20 10:39:20.259849] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.170 [2024-11-20 10:39:20.260209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2079187 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2079187 /var/tmp/bdevperf.sock 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2079187 ']' 00:21:48.170 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.171 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.171 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.171 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.171 10:39:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.171 [2024-11-20 10:39:20.344125] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:48.171 [2024-11-20 10:39:20.344202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079187 ] 00:21:48.171 [2024-11-20 10:39:20.431801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.171 [2024-11-20 10:39:20.465642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.111 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.111 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:49.111 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZYI21px8v 00:21:49.111 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:49.111 [2024-11-20 10:39:21.475396] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.370 nvme0n1 00:21:49.370 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:49.370 Running I/O for 1 seconds... 00:21:50.311 5087.00 IOPS, 19.87 MiB/s 00:21:50.311 Latency(us) 00:21:50.311 [2024-11-20T09:39:22.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:50.311 Verification LBA range: start 0x0 length 0x2000 00:21:50.311 nvme0n1 : 1.02 5130.26 20.04 0.00 0.00 24781.60 5543.25 76458.67 00:21:50.311 [2024-11-20T09:39:22.687Z] =================================================================================================================== 00:21:50.311 [2024-11-20T09:39:22.687Z] Total : 5130.26 20.04 0.00 0.00 24781.60 5543.25 76458.67 00:21:50.311 { 00:21:50.311 "results": [ 00:21:50.311 { 00:21:50.311 "job": "nvme0n1", 00:21:50.311 "core_mask": "0x2", 00:21:50.311 "workload": "verify", 00:21:50.311 "status": "finished", 00:21:50.311 "verify_range": { 00:21:50.311 "start": 0, 00:21:50.311 "length": 8192 00:21:50.311 }, 00:21:50.311 "queue_depth": 128, 00:21:50.311 "io_size": 4096, 00:21:50.311 "runtime": 1.016713, 00:21:50.311 "iops": 5130.25799807812, 00:21:50.311 "mibps": 20.040070304992657, 00:21:50.311 "io_failed": 0, 00:21:50.311 "io_timeout": 0, 00:21:50.311 "avg_latency_us": 24781.595746421266, 00:21:50.311 "min_latency_us": 5543.253333333333, 00:21:50.311 "max_latency_us": 76458.66666666667 00:21:50.311 } 00:21:50.311 ], 00:21:50.311 "core_count": 1 00:21:50.311 } 00:21:50.571 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:50.571 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.571 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.571 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.571 10:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:50.571 "subsystems": [ 00:21:50.571 { 00:21:50.571 "subsystem": "keyring", 00:21:50.571 "config": [ 00:21:50.571 { 00:21:50.571 "method": "keyring_file_add_key", 00:21:50.571 "params": { 00:21:50.571 "name": "key0", 00:21:50.571 "path": "/tmp/tmp.gZYI21px8v" 00:21:50.571 } 00:21:50.571 } 00:21:50.571 ] 00:21:50.571 }, 00:21:50.571 { 00:21:50.571 "subsystem": "iobuf", 00:21:50.571 "config": [ 00:21:50.571 { 00:21:50.571 "method": "iobuf_set_options", 00:21:50.571 "params": { 00:21:50.571 "small_pool_count": 8192, 00:21:50.571 "large_pool_count": 1024, 00:21:50.571 "small_bufsize": 8192, 00:21:50.571 "large_bufsize": 135168, 00:21:50.571 "enable_numa": false 00:21:50.571 } 00:21:50.571 } 00:21:50.571 ] 00:21:50.571 }, 00:21:50.571 { 00:21:50.571 "subsystem": "sock", 00:21:50.571 "config": [ 00:21:50.571 { 00:21:50.571 "method": "sock_set_default_impl", 00:21:50.571 "params": { 00:21:50.571 "impl_name": "posix" 00:21:50.571 } 00:21:50.571 }, 00:21:50.571 { 00:21:50.572 "method": "sock_impl_set_options", 00:21:50.572 "params": { 00:21:50.572 "impl_name": "ssl", 00:21:50.572 "recv_buf_size": 4096, 00:21:50.572 "send_buf_size": 4096, 00:21:50.572 "enable_recv_pipe": true, 00:21:50.572 "enable_quickack": false, 00:21:50.572 "enable_placement_id": 0, 00:21:50.572 "enable_zerocopy_send_server": true, 00:21:50.572 "enable_zerocopy_send_client": false, 00:21:50.572 "zerocopy_threshold": 0, 00:21:50.572 "tls_version": 0, 00:21:50.572 "enable_ktls": false 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "sock_impl_set_options", 00:21:50.572 "params": { 00:21:50.572 "impl_name": "posix", 00:21:50.572 "recv_buf_size": 2097152, 00:21:50.572 "send_buf_size": 2097152, 00:21:50.572 "enable_recv_pipe": true, 00:21:50.572 "enable_quickack": false, 00:21:50.572 "enable_placement_id": 0, 00:21:50.572 "enable_zerocopy_send_server": true, 00:21:50.572 "enable_zerocopy_send_client": false, 00:21:50.572 "zerocopy_threshold": 0, 00:21:50.572 "tls_version": 0, 00:21:50.572 "enable_ktls": false 00:21:50.572 } 00:21:50.572 } 00:21:50.572 ] 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "subsystem": "vmd", 00:21:50.572 "config": [] 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "subsystem": "accel", 00:21:50.572 "config": [ 00:21:50.572 { 00:21:50.572 "method": "accel_set_options", 00:21:50.572 "params": { 00:21:50.572 "small_cache_size": 128, 00:21:50.572 "large_cache_size": 16, 00:21:50.572 "task_count": 2048, 00:21:50.572 "sequence_count": 2048, 00:21:50.572 "buf_count": 2048 00:21:50.572 } 00:21:50.572 } 00:21:50.572 ] 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "subsystem": "bdev", 00:21:50.572 "config": [ 00:21:50.572 { 00:21:50.572 "method": "bdev_set_options", 00:21:50.572 "params": { 00:21:50.572 "bdev_io_pool_size": 65535, 00:21:50.572 "bdev_io_cache_size": 256, 00:21:50.572 "bdev_auto_examine": true, 00:21:50.572 "iobuf_small_cache_size": 128, 00:21:50.572 "iobuf_large_cache_size": 16 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "bdev_raid_set_options", 00:21:50.572 "params": { 00:21:50.572 "process_window_size_kb": 1024, 00:21:50.572 "process_max_bandwidth_mb_sec": 0 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "bdev_iscsi_set_options", 00:21:50.572 "params": { 00:21:50.572 "timeout_sec": 30 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "bdev_nvme_set_options", 00:21:50.572 "params": { 00:21:50.572 "action_on_timeout": "none", 00:21:50.572 
"timeout_us": 0, 00:21:50.572 "timeout_admin_us": 0, 00:21:50.572 "keep_alive_timeout_ms": 10000, 00:21:50.572 "arbitration_burst": 0, 00:21:50.572 "low_priority_weight": 0, 00:21:50.572 "medium_priority_weight": 0, 00:21:50.572 "high_priority_weight": 0, 00:21:50.572 "nvme_adminq_poll_period_us": 10000, 00:21:50.572 "nvme_ioq_poll_period_us": 0, 00:21:50.572 "io_queue_requests": 0, 00:21:50.572 "delay_cmd_submit": true, 00:21:50.572 "transport_retry_count": 4, 00:21:50.572 "bdev_retry_count": 3, 00:21:50.572 "transport_ack_timeout": 0, 00:21:50.572 "ctrlr_loss_timeout_sec": 0, 00:21:50.572 "reconnect_delay_sec": 0, 00:21:50.572 "fast_io_fail_timeout_sec": 0, 00:21:50.572 "disable_auto_failback": false, 00:21:50.572 "generate_uuids": false, 00:21:50.572 "transport_tos": 0, 00:21:50.572 "nvme_error_stat": false, 00:21:50.572 "rdma_srq_size": 0, 00:21:50.572 "io_path_stat": false, 00:21:50.572 "allow_accel_sequence": false, 00:21:50.572 "rdma_max_cq_size": 0, 00:21:50.572 "rdma_cm_event_timeout_ms": 0, 00:21:50.572 "dhchap_digests": [ 00:21:50.572 "sha256", 00:21:50.572 "sha384", 00:21:50.572 "sha512" 00:21:50.572 ], 00:21:50.572 "dhchap_dhgroups": [ 00:21:50.572 "null", 00:21:50.572 "ffdhe2048", 00:21:50.572 "ffdhe3072", 00:21:50.572 "ffdhe4096", 00:21:50.572 "ffdhe6144", 00:21:50.572 "ffdhe8192" 00:21:50.572 ] 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "bdev_nvme_set_hotplug", 00:21:50.572 "params": { 00:21:50.572 "period_us": 100000, 00:21:50.572 "enable": false 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "bdev_malloc_create", 00:21:50.572 "params": { 00:21:50.572 "name": "malloc0", 00:21:50.572 "num_blocks": 8192, 00:21:50.572 "block_size": 4096, 00:21:50.572 "physical_block_size": 4096, 00:21:50.572 "uuid": "a6fa0c3f-c8b5-4479-b088-0a96310b9cea", 00:21:50.572 "optimal_io_boundary": 0, 00:21:50.572 "md_size": 0, 00:21:50.572 "dif_type": 0, 00:21:50.572 "dif_is_head_of_md": false, 00:21:50.572 "dif_pi_format": 0 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "bdev_wait_for_examine" 00:21:50.572 } 00:21:50.572 ] 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "subsystem": "nbd", 00:21:50.572 "config": [] 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "subsystem": "scheduler", 00:21:50.572 "config": [ 00:21:50.572 { 00:21:50.572 "method": "framework_set_scheduler", 00:21:50.572 "params": { 00:21:50.572 "name": "static" 00:21:50.572 } 00:21:50.572 } 00:21:50.572 ] 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "subsystem": "nvmf", 00:21:50.572 "config": [ 00:21:50.572 { 00:21:50.572 "method": "nvmf_set_config", 00:21:50.572 "params": { 00:21:50.572 "discovery_filter": "match_any", 00:21:50.572 "admin_cmd_passthru": { 00:21:50.572 "identify_ctrlr": false 00:21:50.572 }, 00:21:50.572 "dhchap_digests": [ 00:21:50.572 "sha256", 00:21:50.572 "sha384", 00:21:50.572 "sha512" 00:21:50.572 ], 00:21:50.572 "dhchap_dhgroups": [ 00:21:50.572 "null", 00:21:50.572 "ffdhe2048", 00:21:50.572 "ffdhe3072", 00:21:50.572 "ffdhe4096", 00:21:50.572 "ffdhe6144", 00:21:50.572 "ffdhe8192" 00:21:50.572 ] 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_set_max_subsystems", 00:21:50.572 "params": { 00:21:50.572 "max_subsystems": 1024 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_set_crdt", 00:21:50.572 "params": { 00:21:50.572 "crdt1": 0, 00:21:50.572 "crdt2": 0, 00:21:50.572 "crdt3": 0 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_create_transport", 00:21:50.572 "params": 
{ 00:21:50.572 "trtype": "TCP", 00:21:50.572 "max_queue_depth": 128, 00:21:50.572 "max_io_qpairs_per_ctrlr": 127, 00:21:50.572 "in_capsule_data_size": 4096, 00:21:50.572 "max_io_size": 131072, 00:21:50.572 "io_unit_size": 131072, 00:21:50.572 "max_aq_depth": 128, 00:21:50.572 "num_shared_buffers": 511, 00:21:50.572 "buf_cache_size": 4294967295, 00:21:50.572 "dif_insert_or_strip": false, 00:21:50.572 "zcopy": false, 00:21:50.572 "c2h_success": false, 00:21:50.572 "sock_priority": 0, 00:21:50.572 "abort_timeout_sec": 1, 00:21:50.572 "ack_timeout": 0, 00:21:50.572 "data_wr_pool_size": 0 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_create_subsystem", 00:21:50.572 "params": { 00:21:50.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.572 "allow_any_host": false, 00:21:50.572 "serial_number": "00000000000000000000", 00:21:50.572 "model_number": "SPDK bdev Controller", 00:21:50.572 "max_namespaces": 32, 00:21:50.572 "min_cntlid": 1, 00:21:50.572 "max_cntlid": 65519, 00:21:50.572 "ana_reporting": false 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_subsystem_add_host", 00:21:50.572 "params": { 00:21:50.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.572 "host": "nqn.2016-06.io.spdk:host1", 00:21:50.572 "psk": "key0" 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_subsystem_add_ns", 00:21:50.572 "params": { 00:21:50.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.572 "namespace": { 00:21:50.572 "nsid": 1, 00:21:50.572 "bdev_name": "malloc0", 00:21:50.572 "nguid": "A6FA0C3FC8B54479B0880A96310B9CEA", 00:21:50.572 "uuid": "a6fa0c3f-c8b5-4479-b088-0a96310b9cea", 00:21:50.572 "no_auto_visible": false 00:21:50.572 } 00:21:50.572 } 00:21:50.572 }, 00:21:50.572 { 00:21:50.572 "method": "nvmf_subsystem_add_listener", 00:21:50.572 "params": { 00:21:50.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.572 "listen_address": { 00:21:50.572 "trtype": "TCP", 00:21:50.572 "adrfam": "IPv4", 00:21:50.572 "traddr": "10.0.0.2", 00:21:50.572 "trsvcid": "4420" 00:21:50.572 }, 00:21:50.572 "secure_channel": false, 00:21:50.572 "sock_impl": "ssl" 00:21:50.572 } 00:21:50.572 } 00:21:50.572 ] 00:21:50.572 } 00:21:50.572 ] 00:21:50.572 }' 00:21:50.572 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:50.833 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:50.833 "subsystems": [ 00:21:50.833 { 00:21:50.833 "subsystem": "keyring", 00:21:50.833 "config": [ 00:21:50.833 { 00:21:50.833 "method": "keyring_file_add_key", 00:21:50.833 "params": { 00:21:50.833 "name": "key0", 00:21:50.833 "path": "/tmp/tmp.gZYI21px8v" 00:21:50.833 } 00:21:50.833 } 00:21:50.833 ] 00:21:50.833 }, 00:21:50.833 { 00:21:50.833 "subsystem": "iobuf", 00:21:50.833 "config": [ 00:21:50.833 { 00:21:50.833 "method": "iobuf_set_options", 00:21:50.833 "params": { 00:21:50.833 "small_pool_count": 8192, 00:21:50.833 "large_pool_count": 1024, 00:21:50.833 "small_bufsize": 8192, 00:21:50.833 "large_bufsize": 135168, 00:21:50.833 "enable_numa": false 00:21:50.833 } 00:21:50.833 } 00:21:50.833 ] 00:21:50.833 }, 00:21:50.833 { 00:21:50.833 "subsystem": "sock", 00:21:50.833 "config": [ 00:21:50.833 { 00:21:50.833 "method": "sock_set_default_impl", 00:21:50.833 "params": { 00:21:50.833 "impl_name": "posix" 00:21:50.833 } 00:21:50.833 }, 00:21:50.833 { 00:21:50.833 "method": "sock_impl_set_options", 00:21:50.833 
"params": { 00:21:50.833 "impl_name": "ssl", 00:21:50.833 "recv_buf_size": 4096, 00:21:50.833 "send_buf_size": 4096, 00:21:50.833 "enable_recv_pipe": true, 00:21:50.833 "enable_quickack": false, 00:21:50.833 "enable_placement_id": 0, 00:21:50.833 "enable_zerocopy_send_server": true, 00:21:50.833 "enable_zerocopy_send_client": false, 00:21:50.833 "zerocopy_threshold": 0, 00:21:50.833 "tls_version": 0, 00:21:50.834 "enable_ktls": false 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "sock_impl_set_options", 00:21:50.834 "params": { 00:21:50.834 "impl_name": "posix", 00:21:50.834 "recv_buf_size": 2097152, 00:21:50.834 "send_buf_size": 2097152, 00:21:50.834 "enable_recv_pipe": true, 00:21:50.834 "enable_quickack": false, 00:21:50.834 "enable_placement_id": 0, 00:21:50.834 "enable_zerocopy_send_server": true, 00:21:50.834 "enable_zerocopy_send_client": false, 00:21:50.834 "zerocopy_threshold": 0, 00:21:50.834 "tls_version": 0, 00:21:50.834 "enable_ktls": false 00:21:50.834 } 00:21:50.834 } 00:21:50.834 ] 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "subsystem": "vmd", 00:21:50.834 "config": [] 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "subsystem": "accel", 00:21:50.834 "config": [ 00:21:50.834 { 00:21:50.834 "method": "accel_set_options", 00:21:50.834 "params": { 00:21:50.834 "small_cache_size": 128, 00:21:50.834 "large_cache_size": 16, 00:21:50.834 "task_count": 2048, 00:21:50.834 "sequence_count": 2048, 00:21:50.834 "buf_count": 2048 00:21:50.834 } 00:21:50.834 } 00:21:50.834 ] 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "subsystem": "bdev", 00:21:50.834 "config": [ 00:21:50.834 { 00:21:50.834 "method": "bdev_set_options", 00:21:50.834 "params": { 00:21:50.834 "bdev_io_pool_size": 65535, 00:21:50.834 "bdev_io_cache_size": 256, 00:21:50.834 "bdev_auto_examine": true, 00:21:50.834 "iobuf_small_cache_size": 128, 00:21:50.834 "iobuf_large_cache_size": 16 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_raid_set_options", 00:21:50.834 "params": { 00:21:50.834 "process_window_size_kb": 1024, 00:21:50.834 "process_max_bandwidth_mb_sec": 0 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_iscsi_set_options", 00:21:50.834 "params": { 00:21:50.834 "timeout_sec": 30 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_nvme_set_options", 00:21:50.834 "params": { 00:21:50.834 "action_on_timeout": "none", 00:21:50.834 "timeout_us": 0, 00:21:50.834 "timeout_admin_us": 0, 00:21:50.834 "keep_alive_timeout_ms": 10000, 00:21:50.834 "arbitration_burst": 0, 00:21:50.834 "low_priority_weight": 0, 00:21:50.834 "medium_priority_weight": 0, 00:21:50.834 "high_priority_weight": 0, 00:21:50.834 "nvme_adminq_poll_period_us": 10000, 00:21:50.834 "nvme_ioq_poll_period_us": 0, 00:21:50.834 "io_queue_requests": 512, 00:21:50.834 "delay_cmd_submit": true, 00:21:50.834 "transport_retry_count": 4, 00:21:50.834 "bdev_retry_count": 3, 00:21:50.834 "transport_ack_timeout": 0, 00:21:50.834 "ctrlr_loss_timeout_sec": 0, 00:21:50.834 "reconnect_delay_sec": 0, 00:21:50.834 "fast_io_fail_timeout_sec": 0, 00:21:50.834 "disable_auto_failback": false, 00:21:50.834 "generate_uuids": false, 00:21:50.834 "transport_tos": 0, 00:21:50.834 "nvme_error_stat": false, 00:21:50.834 "rdma_srq_size": 0, 00:21:50.834 "io_path_stat": false, 00:21:50.834 "allow_accel_sequence": false, 00:21:50.834 "rdma_max_cq_size": 0, 00:21:50.834 "rdma_cm_event_timeout_ms": 0, 00:21:50.834 "dhchap_digests": [ 00:21:50.834 "sha256", 00:21:50.834 "sha384", 00:21:50.834 
"sha512" 00:21:50.834 ], 00:21:50.834 "dhchap_dhgroups": [ 00:21:50.834 "null", 00:21:50.834 "ffdhe2048", 00:21:50.834 "ffdhe3072", 00:21:50.834 "ffdhe4096", 00:21:50.834 "ffdhe6144", 00:21:50.834 "ffdhe8192" 00:21:50.834 ] 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_nvme_attach_controller", 00:21:50.834 "params": { 00:21:50.834 "name": "nvme0", 00:21:50.834 "trtype": "TCP", 00:21:50.834 "adrfam": "IPv4", 00:21:50.834 "traddr": "10.0.0.2", 00:21:50.834 "trsvcid": "4420", 00:21:50.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.834 "prchk_reftag": false, 00:21:50.834 "prchk_guard": false, 00:21:50.834 "ctrlr_loss_timeout_sec": 0, 00:21:50.834 "reconnect_delay_sec": 0, 00:21:50.834 "fast_io_fail_timeout_sec": 0, 00:21:50.834 "psk": "key0", 00:21:50.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.834 "hdgst": false, 00:21:50.834 "ddgst": false, 00:21:50.834 "multipath": "multipath" 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_nvme_set_hotplug", 00:21:50.834 "params": { 00:21:50.834 "period_us": 100000, 00:21:50.834 "enable": false 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_enable_histogram", 00:21:50.834 "params": { 00:21:50.834 "name": "nvme0n1", 00:21:50.834 "enable": true 00:21:50.834 } 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "method": "bdev_wait_for_examine" 00:21:50.834 } 00:21:50.834 ] 00:21:50.834 }, 00:21:50.834 { 00:21:50.834 "subsystem": "nbd", 00:21:50.834 "config": [] 00:21:50.834 } 00:21:50.834 ] 00:21:50.834 }' 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2079187 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2079187 ']' 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2079187 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079187 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079187' 00:21:50.834 killing process with pid 2079187 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2079187 00:21:50.834 Received shutdown signal, test time was about 1.000000 seconds 00:21:50.834 00:21:50.834 Latency(us) 00:21:50.834 [2024-11-20T09:39:23.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.834 [2024-11-20T09:39:23.210Z] =================================================================================================================== 00:21:50.834 [2024-11-20T09:39:23.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.834 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2079187 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2078858 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2078858 
']' 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2078858 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2078858 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2078858' 00:21:51.096 killing process with pid 2078858 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2078858 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2078858 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.096 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:51.096 "subsystems": [ 00:21:51.096 { 00:21:51.096 "subsystem": "keyring", 00:21:51.096 "config": [ 00:21:51.096 { 00:21:51.096 "method": "keyring_file_add_key", 00:21:51.096 "params": { 00:21:51.096 "name": "key0", 00:21:51.096 "path": "/tmp/tmp.gZYI21px8v" 00:21:51.096 } 00:21:51.096 } 00:21:51.096 ] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "iobuf", 00:21:51.096 "config": [ 00:21:51.096 { 00:21:51.096 "method": "iobuf_set_options", 00:21:51.096 "params": { 00:21:51.096 "small_pool_count": 8192, 00:21:51.096 "large_pool_count": 1024, 00:21:51.096 "small_bufsize": 8192, 00:21:51.096 "large_bufsize": 135168, 00:21:51.096 "enable_numa": false 00:21:51.096 } 00:21:51.096 } 00:21:51.096 ] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "sock", 00:21:51.096 "config": [ 00:21:51.096 { 00:21:51.096 "method": "sock_set_default_impl", 00:21:51.096 "params": { 00:21:51.096 "impl_name": "posix" 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "sock_impl_set_options", 00:21:51.096 "params": { 00:21:51.096 "impl_name": "ssl", 00:21:51.096 "recv_buf_size": 4096, 00:21:51.096 "send_buf_size": 4096, 00:21:51.096 "enable_recv_pipe": true, 00:21:51.096 "enable_quickack": false, 00:21:51.096 "enable_placement_id": 0, 00:21:51.096 "enable_zerocopy_send_server": true, 00:21:51.096 "enable_zerocopy_send_client": false, 00:21:51.096 "zerocopy_threshold": 0, 00:21:51.096 "tls_version": 0, 00:21:51.096 "enable_ktls": false 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "sock_impl_set_options", 00:21:51.096 "params": { 00:21:51.096 "impl_name": "posix", 00:21:51.096 "recv_buf_size": 2097152, 00:21:51.096 "send_buf_size": 2097152, 00:21:51.096 "enable_recv_pipe": true, 00:21:51.096 "enable_quickack": false, 00:21:51.096 "enable_placement_id": 0, 00:21:51.096 "enable_zerocopy_send_server": true, 00:21:51.096 "enable_zerocopy_send_client": false, 00:21:51.096 "zerocopy_threshold": 0, 00:21:51.096 "tls_version": 0, 00:21:51.096 "enable_ktls": 
false 00:21:51.096 } 00:21:51.096 } 00:21:51.096 ] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "vmd", 00:21:51.096 "config": [] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "accel", 00:21:51.096 "config": [ 00:21:51.096 { 00:21:51.096 "method": "accel_set_options", 00:21:51.096 "params": { 00:21:51.096 "small_cache_size": 128, 00:21:51.096 "large_cache_size": 16, 00:21:51.096 "task_count": 2048, 00:21:51.096 "sequence_count": 2048, 00:21:51.096 "buf_count": 2048 00:21:51.096 } 00:21:51.096 } 00:21:51.096 ] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "bdev", 00:21:51.096 "config": [ 00:21:51.096 { 00:21:51.096 "method": "bdev_set_options", 00:21:51.096 "params": { 00:21:51.096 "bdev_io_pool_size": 65535, 00:21:51.096 "bdev_io_cache_size": 256, 00:21:51.096 "bdev_auto_examine": true, 00:21:51.096 "iobuf_small_cache_size": 128, 00:21:51.096 "iobuf_large_cache_size": 16 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "bdev_raid_set_options", 00:21:51.096 "params": { 00:21:51.096 "process_window_size_kb": 1024, 00:21:51.096 "process_max_bandwidth_mb_sec": 0 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "bdev_iscsi_set_options", 00:21:51.096 "params": { 00:21:51.096 "timeout_sec": 30 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "bdev_nvme_set_options", 00:21:51.096 "params": { 00:21:51.096 "action_on_timeout": "none", 00:21:51.096 "timeout_us": 0, 00:21:51.096 "timeout_admin_us": 0, 00:21:51.096 "keep_alive_timeout_ms": 10000, 00:21:51.096 "arbitration_burst": 0, 00:21:51.096 "low_priority_weight": 0, 00:21:51.096 "medium_priority_weight": 0, 00:21:51.096 "high_priority_weight": 0, 00:21:51.096 "nvme_adminq_poll_period_us": 10000, 00:21:51.096 "nvme_ioq_poll_period_us": 0, 00:21:51.096 "io_queue_requests": 0, 00:21:51.096 "delay_cmd_submit": true, 00:21:51.096 "transport_retry_count": 4, 00:21:51.096 "bdev_retry_count": 3, 00:21:51.096 "transport_ack_timeout": 0, 00:21:51.096 "ctrlr_loss_timeout_sec": 0, 00:21:51.096 "reconnect_delay_sec": 0, 00:21:51.096 "fast_io_fail_timeout_sec": 0, 00:21:51.096 "disable_auto_failback": false, 00:21:51.096 "generate_uuids": false, 00:21:51.096 "transport_tos": 0, 00:21:51.096 "nvme_error_stat": false, 00:21:51.096 "rdma_srq_size": 0, 00:21:51.096 "io_path_stat": false, 00:21:51.096 "allow_accel_sequence": false, 00:21:51.096 "rdma_max_cq_size": 0, 00:21:51.096 "rdma_cm_event_timeout_ms": 0, 00:21:51.096 "dhchap_digests": [ 00:21:51.096 "sha256", 00:21:51.096 "sha384", 00:21:51.096 "sha512" 00:21:51.096 ], 00:21:51.096 "dhchap_dhgroups": [ 00:21:51.096 "null", 00:21:51.096 "ffdhe2048", 00:21:51.096 "ffdhe3072", 00:21:51.096 "ffdhe4096", 00:21:51.096 "ffdhe6144", 00:21:51.096 "ffdhe8192" 00:21:51.096 ] 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "bdev_nvme_set_hotplug", 00:21:51.096 "params": { 00:21:51.096 "period_us": 100000, 00:21:51.096 "enable": false 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "bdev_malloc_create", 00:21:51.096 "params": { 00:21:51.096 "name": "malloc0", 00:21:51.096 "num_blocks": 8192, 00:21:51.096 "block_size": 4096, 00:21:51.096 "physical_block_size": 4096, 00:21:51.096 "uuid": "a6fa0c3f-c8b5-4479-b088-0a96310b9cea", 00:21:51.096 "optimal_io_boundary": 0, 00:21:51.096 "md_size": 0, 00:21:51.096 "dif_type": 0, 00:21:51.096 "dif_is_head_of_md": false, 00:21:51.096 "dif_pi_format": 0 00:21:51.096 } 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "method": "bdev_wait_for_examine" 
00:21:51.096 } 00:21:51.096 ] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "nbd", 00:21:51.096 "config": [] 00:21:51.096 }, 00:21:51.096 { 00:21:51.096 "subsystem": "scheduler", 00:21:51.096 "config": [ 00:21:51.096 { 00:21:51.097 "method": "framework_set_scheduler", 00:21:51.097 "params": { 00:21:51.097 "name": "static" 00:21:51.097 } 00:21:51.097 } 00:21:51.097 ] 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "subsystem": "nvmf", 00:21:51.097 "config": [ 00:21:51.097 { 00:21:51.097 "method": "nvmf_set_config", 00:21:51.097 "params": { 00:21:51.097 "discovery_filter": "match_any", 00:21:51.097 "admin_cmd_passthru": { 00:21:51.097 "identify_ctrlr": false 00:21:51.097 }, 00:21:51.097 "dhchap_digests": [ 00:21:51.097 "sha256", 00:21:51.097 "sha384", 00:21:51.097 "sha512" 00:21:51.097 ], 00:21:51.097 "dhchap_dhgroups": [ 00:21:51.097 "null", 00:21:51.097 "ffdhe2048", 00:21:51.097 "ffdhe3072", 00:21:51.097 "ffdhe4096", 00:21:51.097 "ffdhe6144", 00:21:51.097 "ffdhe8192" 00:21:51.097 ] 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_set_max_subsystems", 00:21:51.097 "params": { 00:21:51.097 "max_subsystems": 1024 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_set_crdt", 00:21:51.097 "params": { 00:21:51.097 "crdt1": 0, 00:21:51.097 "crdt2": 0, 00:21:51.097 "crdt3": 0 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_create_transport", 00:21:51.097 "params": { 00:21:51.097 "trtype": "TCP", 00:21:51.097 "max_queue_depth": 128, 00:21:51.097 "max_io_qpairs_per_ctrlr": 127, 00:21:51.097 "in_capsule_data_size": 4096, 00:21:51.097 "max_io_size": 131072, 00:21:51.097 "io_unit_size": 131072, 00:21:51.097 "max_aq_depth": 128, 00:21:51.097 "num_shared_buffers": 511, 00:21:51.097 "buf_cache_size": 4294967295, 00:21:51.097 "dif_insert_or_strip": false, 00:21:51.097 "zcopy": false, 00:21:51.097 "c2h_success": false, 00:21:51.097 "sock_priority": 0, 00:21:51.097 "abort_timeout_sec": 1, 00:21:51.097 "ack_timeout": 0, 00:21:51.097 "data_wr_pool_size": 0 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_create_subsystem", 00:21:51.097 "params": { 00:21:51.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.097 "allow_any_host": false, 00:21:51.097 "serial_number": "00000000000000000000", 00:21:51.097 "model_number": "SPDK bdev Controller", 00:21:51.097 "max_namespaces": 32, 00:21:51.097 "min_cntlid": 1, 00:21:51.097 "max_cntlid": 65519, 00:21:51.097 "ana_reporting": false 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_subsystem_add_host", 00:21:51.097 "params": { 00:21:51.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.097 "host": "nqn.2016-06.io.spdk:host1", 00:21:51.097 "psk": "key0" 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_subsystem_add_ns", 00:21:51.097 "params": { 00:21:51.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.097 "namespace": { 00:21:51.097 "nsid": 1, 00:21:51.097 "bdev_name": "malloc0", 00:21:51.097 "nguid": "A6FA0C3FC8B54479B0880A96310B9CEA", 00:21:51.097 "uuid": "a6fa0c3f-c8b5-4479-b088-0a96310b9cea", 00:21:51.097 "no_auto_visible": false 00:21:51.097 } 00:21:51.097 } 00:21:51.097 }, 00:21:51.097 { 00:21:51.097 "method": "nvmf_subsystem_add_listener", 00:21:51.097 "params": { 00:21:51.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.097 "listen_address": { 00:21:51.097 "trtype": "TCP", 00:21:51.097 "adrfam": "IPv4", 00:21:51.097 "traddr": "10.0.0.2", 00:21:51.097 "trsvcid": "4420" 00:21:51.097 }, 00:21:51.097 
"secure_channel": false, 00:21:51.097 "sock_impl": "ssl" 00:21:51.097 } 00:21:51.097 } 00:21:51.097 ] 00:21:51.097 } 00:21:51.097 ] 00:21:51.097 }' 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2079716 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2079716 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2079716 ']' 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.097 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.097 [2024-11-20 10:39:23.467917] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:21:51.097 [2024-11-20 10:39:23.467975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.357 [2024-11-20 10:39:23.559774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.357 [2024-11-20 10:39:23.588816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.357 [2024-11-20 10:39:23.588842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.357 [2024-11-20 10:39:23.588848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.357 [2024-11-20 10:39:23.588852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.357 [2024-11-20 10:39:23.588857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:51.357 [2024-11-20 10:39:23.589352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.618 [2024-11-20 10:39:23.782578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.618 [2024-11-20 10:39:23.814606] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.618 [2024-11-20 10:39:23.814805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2079922 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2079922 /var/tmp/bdevperf.sock 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2079922 ']' 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
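Note: the bdevperf result JSON blocks earlier in this run are internally consistent and easy to spot-check: mibps is just iops scaled by the 4 KiB io_size, i.e. iops * 4096 / 2^20 = iops / 256. For the first verify run, 4972.706900655488 / 256 = 19.4246..., exactly the reported "mibps". The same check as a one-liner (illustrative only, not part of the test):

# mibps = iops * io_size / MiB; values taken from the first results block above
awk -v iops=4972.706900655488 -v io_size=4096 \
    'BEGIN { printf "%.10f MiB/s\n", iops * io_size / (1024 * 1024) }'
# -> 19.4246363307 MiB/s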
00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.188 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:52.188 "subsystems": [ 00:21:52.188 { 00:21:52.188 "subsystem": "keyring", 00:21:52.188 "config": [ 00:21:52.188 { 00:21:52.188 "method": "keyring_file_add_key", 00:21:52.188 "params": { 00:21:52.188 "name": "key0", 00:21:52.188 "path": "/tmp/tmp.gZYI21px8v" 00:21:52.188 } 00:21:52.188 } 00:21:52.188 ] 00:21:52.188 }, 00:21:52.188 { 00:21:52.188 "subsystem": "iobuf", 00:21:52.188 "config": [ 00:21:52.188 { 00:21:52.189 "method": "iobuf_set_options", 00:21:52.189 "params": { 00:21:52.189 "small_pool_count": 8192, 00:21:52.189 "large_pool_count": 1024, 00:21:52.189 "small_bufsize": 8192, 00:21:52.189 "large_bufsize": 135168, 00:21:52.189 "enable_numa": false 00:21:52.189 } 00:21:52.189 } 00:21:52.189 ] 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "subsystem": "sock", 00:21:52.189 "config": [ 00:21:52.189 { 00:21:52.189 "method": "sock_set_default_impl", 00:21:52.189 "params": { 00:21:52.189 "impl_name": "posix" 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "sock_impl_set_options", 00:21:52.189 "params": { 00:21:52.189 "impl_name": "ssl", 00:21:52.189 "recv_buf_size": 4096, 00:21:52.189 "send_buf_size": 4096, 00:21:52.189 "enable_recv_pipe": true, 00:21:52.189 "enable_quickack": false, 00:21:52.189 "enable_placement_id": 0, 00:21:52.189 "enable_zerocopy_send_server": true, 00:21:52.189 "enable_zerocopy_send_client": false, 00:21:52.189 "zerocopy_threshold": 0, 00:21:52.189 "tls_version": 0, 00:21:52.189 "enable_ktls": false 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "sock_impl_set_options", 00:21:52.189 "params": { 00:21:52.189 "impl_name": "posix", 00:21:52.189 "recv_buf_size": 2097152, 00:21:52.189 "send_buf_size": 2097152, 00:21:52.189 "enable_recv_pipe": true, 00:21:52.189 "enable_quickack": false, 00:21:52.189 "enable_placement_id": 0, 00:21:52.189 "enable_zerocopy_send_server": true, 00:21:52.189 "enable_zerocopy_send_client": false, 00:21:52.189 "zerocopy_threshold": 0, 00:21:52.189 "tls_version": 0, 00:21:52.189 "enable_ktls": false 00:21:52.189 } 00:21:52.189 } 00:21:52.189 ] 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "subsystem": "vmd", 00:21:52.189 "config": [] 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "subsystem": "accel", 00:21:52.189 "config": [ 00:21:52.189 { 00:21:52.189 "method": "accel_set_options", 00:21:52.189 "params": { 00:21:52.189 "small_cache_size": 128, 00:21:52.189 "large_cache_size": 16, 00:21:52.189 "task_count": 2048, 00:21:52.189 "sequence_count": 2048, 00:21:52.189 "buf_count": 2048 00:21:52.189 } 00:21:52.189 } 00:21:52.189 ] 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "subsystem": "bdev", 00:21:52.189 "config": [ 00:21:52.189 { 00:21:52.189 "method": "bdev_set_options", 00:21:52.189 "params": { 00:21:52.189 "bdev_io_pool_size": 65535, 00:21:52.189 "bdev_io_cache_size": 256, 00:21:52.189 "bdev_auto_examine": true, 00:21:52.189 "iobuf_small_cache_size": 128, 00:21:52.189 "iobuf_large_cache_size": 16 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": 
"bdev_raid_set_options", 00:21:52.189 "params": { 00:21:52.189 "process_window_size_kb": 1024, 00:21:52.189 "process_max_bandwidth_mb_sec": 0 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "bdev_iscsi_set_options", 00:21:52.189 "params": { 00:21:52.189 "timeout_sec": 30 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "bdev_nvme_set_options", 00:21:52.189 "params": { 00:21:52.189 "action_on_timeout": "none", 00:21:52.189 "timeout_us": 0, 00:21:52.189 "timeout_admin_us": 0, 00:21:52.189 "keep_alive_timeout_ms": 10000, 00:21:52.189 "arbitration_burst": 0, 00:21:52.189 "low_priority_weight": 0, 00:21:52.189 "medium_priority_weight": 0, 00:21:52.189 "high_priority_weight": 0, 00:21:52.189 "nvme_adminq_poll_period_us": 10000, 00:21:52.189 "nvme_ioq_poll_period_us": 0, 00:21:52.189 "io_queue_requests": 512, 00:21:52.189 "delay_cmd_submit": true, 00:21:52.189 "transport_retry_count": 4, 00:21:52.189 "bdev_retry_count": 3, 00:21:52.189 "transport_ack_timeout": 0, 00:21:52.189 "ctrlr_loss_timeout_sec": 0, 00:21:52.189 "reconnect_delay_sec": 0, 00:21:52.189 "fast_io_fail_timeout_sec": 0, 00:21:52.189 "disable_auto_failback": false, 00:21:52.189 "generate_uuids": false, 00:21:52.189 "transport_tos": 0, 00:21:52.189 "nvme_error_stat": false, 00:21:52.189 "rdma_srq_size": 0, 00:21:52.189 "io_path_stat": false, 00:21:52.189 "allow_accel_sequence": false, 00:21:52.189 "rdma_max_cq_size": 0, 00:21:52.189 "rdma_cm_event_timeout_ms": 0, 00:21:52.189 "dhchap_digests": [ 00:21:52.189 "sha256", 00:21:52.189 "sha384", 00:21:52.189 "sha512" 00:21:52.189 ], 00:21:52.189 "dhchap_dhgroups": [ 00:21:52.189 "null", 00:21:52.189 "ffdhe2048", 00:21:52.189 "ffdhe3072", 00:21:52.189 "ffdhe4096", 00:21:52.189 "ffdhe6144", 00:21:52.189 "ffdhe8192" 00:21:52.189 ] 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "bdev_nvme_attach_controller", 00:21:52.189 "params": { 00:21:52.189 "name": "nvme0", 00:21:52.189 "trtype": "TCP", 00:21:52.189 "adrfam": "IPv4", 00:21:52.189 "traddr": "10.0.0.2", 00:21:52.189 "trsvcid": "4420", 00:21:52.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.189 "prchk_reftag": false, 00:21:52.189 "prchk_guard": false, 00:21:52.189 "ctrlr_loss_timeout_sec": 0, 00:21:52.189 "reconnect_delay_sec": 0, 00:21:52.189 "fast_io_fail_timeout_sec": 0, 00:21:52.189 "psk": "key0", 00:21:52.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.189 "hdgst": false, 00:21:52.189 "ddgst": false, 00:21:52.189 "multipath": "multipath" 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "bdev_nvme_set_hotplug", 00:21:52.189 "params": { 00:21:52.189 "period_us": 100000, 00:21:52.189 "enable": false 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "bdev_enable_histogram", 00:21:52.189 "params": { 00:21:52.189 "name": "nvme0n1", 00:21:52.189 "enable": true 00:21:52.189 } 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "method": "bdev_wait_for_examine" 00:21:52.189 } 00:21:52.189 ] 00:21:52.189 }, 00:21:52.189 { 00:21:52.189 "subsystem": "nbd", 00:21:52.189 "config": [] 00:21:52.189 } 00:21:52.189 ] 00:21:52.189 }' 00:21:52.189 [2024-11-20 10:39:24.354433] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:21:52.189 [2024-11-20 10:39:24.354534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079922 ] 00:21:52.189 [2024-11-20 10:39:24.443079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.189 [2024-11-20 10:39:24.472809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.450 [2024-11-20 10:39:24.607679] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.020 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.020 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:53.020 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.020 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:53.020 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.020 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:53.020 Running I/O for 1 seconds... 00:21:54.403 5341.00 IOPS, 20.86 MiB/s 00:21:54.403 Latency(us) 00:21:54.403 [2024-11-20T09:39:26.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.403 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:54.403 Verification LBA range: start 0x0 length 0x2000 00:21:54.403 nvme0n1 : 1.04 5250.47 20.51 0.00 0.00 23921.73 5106.35 36044.80 00:21:54.403 [2024-11-20T09:39:26.779Z] =================================================================================================================== 00:21:54.403 [2024-11-20T09:39:26.779Z] Total : 5250.47 20.51 0.00 0.00 23921.73 5106.35 36044.80 00:21:54.403 { 00:21:54.403 "results": [ 00:21:54.403 { 00:21:54.403 "job": "nvme0n1", 00:21:54.404 "core_mask": "0x2", 00:21:54.404 "workload": "verify", 00:21:54.404 "status": "finished", 00:21:54.404 "verify_range": { 00:21:54.404 "start": 0, 00:21:54.404 "length": 8192 00:21:54.404 }, 00:21:54.404 "queue_depth": 128, 00:21:54.404 "io_size": 4096, 00:21:54.404 "runtime": 1.041811, 00:21:54.404 "iops": 5250.472494531158, 00:21:54.404 "mibps": 20.509658181762337, 00:21:54.404 "io_failed": 0, 00:21:54.404 "io_timeout": 0, 00:21:54.404 "avg_latency_us": 23921.728273004264, 00:21:54.404 "min_latency_us": 5106.346666666666, 00:21:54.404 "max_latency_us": 36044.8 00:21:54.404 } 00:21:54.404 ], 00:21:54.404 "core_count": 1 00:21:54.404 } 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:54.404 nvmf_trace.0 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2079922 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2079922 ']' 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2079922 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079922 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:54.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079922' 00:21:54.405 killing process with pid 2079922 00:21:54.405 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2079922 00:21:54.405 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.405 00:21:54.405 Latency(us) 00:21:54.405 [2024-11-20T09:39:26.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.405 [2024-11-20T09:39:26.781Z] =================================================================================================================== 00:21:54.405 [2024-11-20T09:39:26.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.405 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2079922 00:21:54.405 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:54.405 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.405 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:54.405 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.406 rmmod nvme_tcp 00:21:54.406 rmmod nvme_fabrics 00:21:54.406 rmmod nvme_keyring 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.406 10:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2079716 ']' 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2079716 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2079716 ']' 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2079716 00:21:54.406 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079716 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079716' 00:21:54.678 killing process with pid 2079716 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2079716 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2079716 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.678 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3FW9N7sFuh /tmp/tmp.FR7mBEWrH4 /tmp/tmp.gZYI21px8v 00:21:57.225 00:21:57.225 real 1m28.370s 00:21:57.225 user 2m19.848s 00:21:57.225 sys 0m27.260s 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.225 ************************************ 00:21:57.225 END TEST nvmf_tls 
00:21:57.225 ************************************ 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.225 ************************************ 00:21:57.225 START TEST nvmf_fips 00:21:57.225 ************************************ 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:57.225 * Looking for test storage... 00:21:57.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:57.225 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:57.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.226 --rc genhtml_branch_coverage=1 00:21:57.226 --rc genhtml_function_coverage=1 00:21:57.226 --rc genhtml_legend=1 00:21:57.226 --rc geninfo_all_blocks=1 00:21:57.226 --rc geninfo_unexecuted_blocks=1 00:21:57.226 00:21:57.226 ' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:57.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.226 --rc genhtml_branch_coverage=1 00:21:57.226 --rc genhtml_function_coverage=1 00:21:57.226 --rc genhtml_legend=1 00:21:57.226 --rc geninfo_all_blocks=1 00:21:57.226 --rc geninfo_unexecuted_blocks=1 00:21:57.226 00:21:57.226 ' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:57.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.226 --rc genhtml_branch_coverage=1 00:21:57.226 --rc genhtml_function_coverage=1 00:21:57.226 --rc genhtml_legend=1 00:21:57.226 --rc geninfo_all_blocks=1 00:21:57.226 --rc geninfo_unexecuted_blocks=1 00:21:57.226 00:21:57.226 ' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:57.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.226 --rc genhtml_branch_coverage=1 00:21:57.226 --rc genhtml_function_coverage=1 00:21:57.226 --rc genhtml_legend=1 00:21:57.226 --rc geninfo_all_blocks=1 00:21:57.226 --rc geninfo_unexecuted_blocks=1 00:21:57.226 00:21:57.226 ' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:57.226 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.226 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:57.227 Error setting digest 00:21:57.227 40E2FAFD5E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:57.227 40E2FAFD5E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.227 
10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.227 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.365 10:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.365 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.365 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.365 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.366 10:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.366 10:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.366 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:22:05.366 00:22:05.366 --- 10.0.0.2 ping statistics --- 00:22:05.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.366 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:22:05.366 00:22:05.366 --- 10.0.0.1 ping statistics --- 00:22:05.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.366 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2084622 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2084622 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2084622 ']' 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.366 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:05.366 [2024-11-20 10:39:37.119633] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
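[annotation] The nvmf_tgt instance started here runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit created just above, so target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) traffic crosses the physical e810 link rather than loopback. A condensed sketch of that setup, with device and namespace names taken from this log (run as root; assumes the two cvl_0_* net devices already exist; the iptables ACCEPT rule and verification pings are omitted):

# Sketch of the namespace split performed by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# every target-side command is then wrapped in the namespace, e.g.:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2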
00:22:05.366 [2024-11-20 10:39:37.119704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.366 [2024-11-20 10:39:37.218175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.366 [2024-11-20 10:39:37.268197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.366 [2024-11-20 10:39:37.268247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.366 [2024-11-20 10:39:37.268256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.366 [2024-11-20 10:39:37.268263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.366 [2024-11-20 10:39:37.268269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.366 [2024-11-20 10:39:37.268985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.iI2 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.iI2 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.iI2 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.iI2 00:22:05.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:05.891 [2024-11-20 10:39:38.148047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.891 [2024-11-20 10:39:38.164042] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.891 [2024-11-20 10:39:38.164386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.891 malloc0 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.891 10:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2084982 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2084982 /var/tmp/bdevperf.sock 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2084982 ']' 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.891 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:06.152 [2024-11-20 10:39:38.305940] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:22:06.152 [2024-11-20 10:39:38.306019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084982 ] 00:22:06.152 [2024-11-20 10:39:38.396806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.152 [2024-11-20 10:39:38.447279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.094 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.094 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:07.095 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.iI2 00:22:07.095 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:07.095 [2024-11-20 10:39:39.436416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.356 TLSTESTn1 00:22:07.356 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.356 Running I/O for 10 seconds... 
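[annotation] Before this 10-second run starts, fips.sh has generated the PSK file, registered it with the bdevperf instance over its private RPC socket, and attached the controller with TLS enabled. A condensed sketch of that sequence, with the key material, paths, and NQNs copied from the log records above (the waitforlisten/retry logic around each step is omitted):

# Sketch of the TLS PSK hand-off driving the TLSTESTn1 run below.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key_path=$(mktemp -t spdk-psk.XXX)            # /tmp/spdk-psk.iI2 in this log
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# the workload itself is then kicked off through bdevperf's helper:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests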
00:22:09.681 4823.00 IOPS, 18.84 MiB/s [2024-11-20T09:39:42.997Z] 4883.50 IOPS, 19.08 MiB/s [2024-11-20T09:39:43.937Z] 5223.33 IOPS, 20.40 MiB/s [2024-11-20T09:39:44.919Z] 5259.75 IOPS, 20.55 MiB/s [2024-11-20T09:39:45.885Z] 5415.40 IOPS, 21.15 MiB/s [2024-11-20T09:39:46.827Z] 5440.50 IOPS, 21.25 MiB/s [2024-11-20T09:39:47.768Z] 5496.00 IOPS, 21.47 MiB/s [2024-11-20T09:39:48.706Z] 5523.62 IOPS, 21.58 MiB/s [2024-11-20T09:39:50.087Z] 5560.11 IOPS, 21.72 MiB/s [2024-11-20T09:39:50.087Z] 5597.00 IOPS, 21.86 MiB/s 00:22:17.711 Latency(us) 00:22:17.711 [2024-11-20T09:39:50.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.711 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:17.711 Verification LBA range: start 0x0 length 0x2000 00:22:17.711 TLSTESTn1 : 10.01 5602.27 21.88 0.00 0.00 22815.74 6034.77 28398.93 00:22:17.711 [2024-11-20T09:39:50.087Z] =================================================================================================================== 00:22:17.711 [2024-11-20T09:39:50.087Z] Total : 5602.27 21.88 0.00 0.00 22815.74 6034.77 28398.93 00:22:17.711 { 00:22:17.711 "results": [ 00:22:17.711 { 00:22:17.711 "job": "TLSTESTn1", 00:22:17.711 "core_mask": "0x4", 00:22:17.711 "workload": "verify", 00:22:17.711 "status": "finished", 00:22:17.711 "verify_range": { 00:22:17.711 "start": 0, 00:22:17.711 "length": 8192 00:22:17.711 }, 00:22:17.711 "queue_depth": 128, 00:22:17.711 "io_size": 4096, 00:22:17.711 "runtime": 10.013081, 00:22:17.711 "iops": 5602.271668430526, 00:22:17.711 "mibps": 21.883873704806742, 00:22:17.711 "io_failed": 0, 00:22:17.711 "io_timeout": 0, 00:22:17.711 "avg_latency_us": 22815.735189199466, 00:22:17.711 "min_latency_us": 6034.7733333333335, 00:22:17.711 "max_latency_us": 28398.933333333334 00:22:17.711 } 00:22:17.711 ], 00:22:17.711 "core_count": 1 00:22:17.711 } 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:17.712 nvmf_trace.0 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2084982 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2084982 ']' 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2084982 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2084982 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2084982' 00:22:17.712 killing process with pid 2084982 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2084982 00:22:17.712 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.712 00:22:17.712 Latency(us) 00:22:17.712 [2024-11-20T09:39:50.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.712 [2024-11-20T09:39:50.088Z] =================================================================================================================== 00:22:17.712 [2024-11-20T09:39:50.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2084982 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.712 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.712 rmmod nvme_tcp 00:22:17.712 rmmod nvme_fabrics 00:22:17.712 rmmod nvme_keyring 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2084622 ']' 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2084622 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2084622 ']' 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2084622 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2084622 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:17.972 10:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2084622' 00:22:17.972 killing process with pid 2084622 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2084622 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2084622 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.972 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.iI2 00:22:20.515 00:22:20.515 real 0m23.196s 00:22:20.515 user 0m24.330s 00:22:20.515 sys 0m10.152s 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:20.515 ************************************ 00:22:20.515 END TEST nvmf_fips 00:22:20.515 ************************************ 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:20.515 ************************************ 00:22:20.515 START TEST nvmf_control_msg_list 00:22:20.515 ************************************ 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:20.515 * Looking for test storage... 
00:22:20.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:20.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.515 --rc genhtml_branch_coverage=1 00:22:20.515 --rc genhtml_function_coverage=1 00:22:20.515 --rc genhtml_legend=1 00:22:20.515 --rc geninfo_all_blocks=1 00:22:20.515 --rc geninfo_unexecuted_blocks=1 00:22:20.515 00:22:20.515 ' 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:20.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.515 --rc genhtml_branch_coverage=1 00:22:20.515 --rc genhtml_function_coverage=1 00:22:20.515 --rc genhtml_legend=1 00:22:20.515 --rc geninfo_all_blocks=1 00:22:20.515 --rc geninfo_unexecuted_blocks=1 00:22:20.515 00:22:20.515 ' 00:22:20.515 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.516 --rc genhtml_branch_coverage=1 00:22:20.516 --rc genhtml_function_coverage=1 00:22:20.516 --rc genhtml_legend=1 00:22:20.516 --rc geninfo_all_blocks=1 00:22:20.516 --rc geninfo_unexecuted_blocks=1 00:22:20.516 00:22:20.516 ' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.516 --rc genhtml_branch_coverage=1 00:22:20.516 --rc genhtml_function_coverage=1 00:22:20.516 --rc genhtml_legend=1 00:22:20.516 --rc geninfo_all_blocks=1 00:22:20.516 --rc geninfo_unexecuted_blocks=1 00:22:20.516 00:22:20.516 ' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:20.516 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:20.517 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.517 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:28.656 10:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.656 10:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.656 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.656 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.656 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.657 10:39:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.657 10:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:22:28.657 00:22:28.657 --- 10.0.0.2 ping statistics --- 00:22:28.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.657 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:22:28.657 00:22:28.657 --- 10.0.0.1 ping statistics --- 00:22:28.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.657 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2091334 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2091334 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2091334 ']' 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.657 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.657 [2024-11-20 10:40:00.199199] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:22:28.657 [2024-11-20 10:40:00.199269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.657 [2024-11-20 10:40:00.298405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.657 [2024-11-20 10:40:00.348637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.657 [2024-11-20 10:40:00.348687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.657 [2024-11-20 10:40:00.348695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.657 [2024-11-20 10:40:00.348702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.657 [2024-11-20 10:40:00.348709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
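Before the trace continues: the control_msg_list test that runs next deliberately starves the target's control-message pool, creating the TCP transport with a single control message and a 768-byte in-capsule data size, then aiming three single-queue-depth spdk_nvme_perf clients at it simultaneously. Condensed from the RPC trace below (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; subsystem, bdev, and address values as in this run):

    # TCP transport with one control message buffer and small in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

    # subsystem backed by a 32 MB malloc bdev (512-byte blocks), listening on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # one of the three concurrent clients (core masks 0x2/0x4/0x8): qd 1, 4 KiB randread, 1 s
    build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &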
00:22:28.657 [2024-11-20 10:40:00.349461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.657 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.657 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:28.657 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.657 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.657 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.919 [2024-11-20 10:40:01.059254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.919 Malloc0 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.919 10:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:28.919 [2024-11-20 10:40:01.113761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2091648 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2091650 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2091652 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2091648 00:22:28.919 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.919 [2024-11-20 10:40:01.214704] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:28.919 [2024-11-20 10:40:01.214996] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:28.919 [2024-11-20 10:40:01.215331] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:30.304 Initializing NVMe Controllers 00:22:30.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:30.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:30.304 Initialization complete. Launching workers. 
00:22:30.304 ======================================================== 00:22:30.304 Latency(us) 00:22:30.304 Device Information : IOPS MiB/s Average min max 00:22:30.304 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1520.00 5.94 657.84 287.55 881.58 00:22:30.304 ======================================================== 00:22:30.304 Total : 1520.00 5.94 657.84 287.55 881.58 00:22:30.304 00:22:30.304 Initializing NVMe Controllers 00:22:30.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:30.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:30.304 Initialization complete. Launching workers. 00:22:30.304 ======================================================== 00:22:30.304 Latency(us) 00:22:30.305 Device Information : IOPS MiB/s Average min max 00:22:30.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40930.44 40754.74 41590.00 00:22:30.305 ======================================================== 00:22:30.305 Total : 25.00 0.10 40930.44 40754.74 41590.00 00:22:30.305 00:22:30.305 Initializing NVMe Controllers 00:22:30.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:30.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:30.305 Initialization complete. Launching workers. 00:22:30.305 ======================================================== 00:22:30.305 Latency(us) 00:22:30.305 Device Information : IOPS MiB/s Average min max 00:22:30.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40921.24 40673.12 41317.83 00:22:30.305 ======================================================== 00:22:30.305 Total : 25.00 0.10 40921.24 40673.12 41317.83 00:22:30.305 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2091650 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2091652 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.305 rmmod nvme_tcp 00:22:30.305 rmmod nvme_fabrics 00:22:30.305 rmmod nvme_keyring 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 2091334 ']' 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2091334 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2091334 ']' 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2091334 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2091334 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2091334' 00:22:30.305 killing process with pid 2091334 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2091334 00:22:30.305 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2091334 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.566 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.478 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.739 00:22:32.739 real 0m12.457s 00:22:32.739 user 0m8.180s 00:22:32.739 sys 0m6.447s 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:32.739 ************************************ 00:22:32.739 END TEST nvmf_control_msg_list 00:22:32.739 
************************************ 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.739 ************************************ 00:22:32.739 START TEST nvmf_wait_for_buf 00:22:32.739 ************************************ 00:22:32.739 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:32.739 * Looking for test storage... 00:22:32.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.739 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:32.739 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:32.739 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.003 --rc genhtml_branch_coverage=1 00:22:33.003 --rc genhtml_function_coverage=1 00:22:33.003 --rc genhtml_legend=1 00:22:33.003 --rc geninfo_all_blocks=1 00:22:33.003 --rc geninfo_unexecuted_blocks=1 00:22:33.003 00:22:33.003 ' 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.003 --rc genhtml_branch_coverage=1 00:22:33.003 --rc genhtml_function_coverage=1 00:22:33.003 --rc genhtml_legend=1 00:22:33.003 --rc geninfo_all_blocks=1 00:22:33.003 --rc geninfo_unexecuted_blocks=1 00:22:33.003 00:22:33.003 ' 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.003 --rc genhtml_branch_coverage=1 00:22:33.003 --rc genhtml_function_coverage=1 00:22:33.003 --rc genhtml_legend=1 00:22:33.003 --rc geninfo_all_blocks=1 00:22:33.003 --rc geninfo_unexecuted_blocks=1 00:22:33.003 00:22:33.003 ' 00:22:33.003 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.003 --rc genhtml_branch_coverage=1 00:22:33.004 --rc genhtml_function_coverage=1 00:22:33.004 --rc genhtml_legend=1 00:22:33.004 --rc geninfo_all_blocks=1 00:22:33.004 --rc geninfo_unexecuted_blocks=1 00:22:33.004 00:22:33.004 ' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.004 10:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
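The "[: : integer expression expected" message above is the shell complaining about '[' '' -eq 1 ']': build_nvmf_app_args tests a configuration variable with -eq while it is unset, so test(1) receives an empty string where it expects an integer. The check falls through harmlessly, but the usual guard looks like this (SOME_FLAG is an illustrative stand-in, not the variable common.sh actually reads):

# Reproduce and fix the pattern behind the logged error.
SOME_FLAG=""                            # unset/empty config value, as in the trace
# [ "$SOME_FLAG" -eq 1 ]                # -> [: : integer expression expected
if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # defaulting the expansion keeps test(1) well-formed
    echo "flag enabled"
fi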
'[' -z tcp ']' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.004 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.142 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.142 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.143 
10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:41.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:41.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:41.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:41.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.143 10:40:12 
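gather_supported_nvmf_pci_devs, traced above, is a sysfs walk: match PCI functions by vendor/device ID (both ports here are 0x8086:0x159b, an E810 variant), then resolve each to its kernel net device through /sys/bus/pci/devices/$pci/net. A hedged sketch of the same mapping, independent of the SPDK helpers:

# Resolve supported Intel E810 functions (0x1592/0x159b) to their net devices.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue       # glob may not match if the port has no netdev bound
        echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done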
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.143 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:22:41.144 00:22:41.144 --- 10.0.0.2 ping statistics --- 00:22:41.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.144 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:22:41.144 00:22:41.144 --- 10.0.0.1 ping statistics --- 00:22:41.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.144 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2096031 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2096031 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2096031 ']' 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.144 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 [2024-11-20 10:40:12.733941] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
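nvmf_tcp_init, whose effects are traced just above, turns the two physical ports into a point-to-point NVMe/TCP link: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); an iptables rule admits port 4420 and one ping in each direction proves connectivity. Condensed to its commands (run as root; names as in the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                   # tagged so cleanup can strip it later
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns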
00:22:41.144 [2024-11-20 10:40:12.734006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.144 [2024-11-20 10:40:12.832443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.144 [2024-11-20 10:40:12.883263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.144 [2024-11-20 10:40:12.883312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.144 [2024-11-20 10:40:12.883320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.144 [2024-11-20 10:40:12.883328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.144 [2024-11-20 10:40:12.883335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.144 [2024-11-20 10:40:12.884080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 Malloc0 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 [2024-11-20 10:40:13.704761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.407 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.408 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.408 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:41.408 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.408 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.408 [2024-11-20 10:40:13.741082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.408 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.408 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:41.667 [2024-11-20 10:40:13.852302] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:43.051 Initializing NVMe Controllers 00:22:43.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:43.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:43.051 Initialization complete. Launching workers. 00:22:43.051 ======================================================== 00:22:43.051 Latency(us) 00:22:43.051 Device Information : IOPS MiB/s Average min max 00:22:43.051 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.22 8012.54 63852.16 00:22:43.051 ======================================================== 00:22:43.051 Total : 129.00 16.12 32295.22 8012.54 63852.16 00:22:43.051 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.051 rmmod nvme_tcp 00:22:43.051 rmmod nvme_fabrics 00:22:43.051 rmmod nvme_keyring 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2096031 ']' 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2096031 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2096031 ']' 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2096031 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
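That retry counter is the whole point of the test, and the trace above contains every step: the target starts under --wait-for-rpc, the iobuf small pool is squeezed to 154 buffers before framework_start_init, the TCP transport gets only 24 shared buffers (-n 24 -b 24), and after a one-second 128 KiB randread run the test asserts that the nvmf_TCP small-pool retry counter is non-zero (2038 here), i.e. I/O completed by waiting for buffers rather than failing. Replayed by hand it would look roughly like this (default rpc.py socket assumed; all arguments taken verbatim from the trace):

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
scripts/rpc.py framework_start_init
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# Non-zero retry count means buffers were waited for, not denied.
scripts/rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'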
common/autotest_common.sh@959 -- # uname 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.051 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2096031 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2096031' 00:22:43.312 killing process with pid 2096031 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2096031 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2096031 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.312 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.854 00:22:45.854 real 0m12.721s 00:22:45.854 user 0m5.070s 00:22:45.854 sys 0m6.233s 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.854 ************************************ 00:22:45.854 END TEST nvmf_wait_for_buf 00:22:45.854 ************************************ 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:45.854 10:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.854 10:40:17 
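nvmftestfini's teardown, traced above, is symmetrical to the setup: unload the host-side NVMe modules, kill the target, strip exactly the firewall rules this run tagged, and drop the namespace. In shorthand (nvmfpid stands for the pid the test recorded, 2096031 above):

modprobe -v -r nvme-tcp nvme-fabrics                  # the rmmod lines above are its output
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 2096031 in the trace
iptables-save | grep -v SPDK_NVMF | iptables-restore  # the iptr helper: drop only tagged rules
ip netns delete cvl_0_0_ns_spdk                       # net effect of remove_spdk_ns (body not shown here)
ip -4 addr flush cvl_0_1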
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:53.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:53.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.991 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:53.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:53.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.992 ************************************ 00:22:53.992 START TEST nvmf_perf_adq 00:22:53.992 ************************************ 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:53.992 * Looking for test storage... 00:22:53.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.992 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.992 10:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.992 --rc genhtml_branch_coverage=1 00:22:53.992 --rc genhtml_function_coverage=1 00:22:53.992 --rc genhtml_legend=1 00:22:53.992 --rc geninfo_all_blocks=1 00:22:53.992 --rc geninfo_unexecuted_blocks=1 00:22:53.992 00:22:53.992 ' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.992 --rc genhtml_branch_coverage=1 00:22:53.992 --rc genhtml_function_coverage=1 00:22:53.992 --rc genhtml_legend=1 00:22:53.992 --rc geninfo_all_blocks=1 00:22:53.992 --rc geninfo_unexecuted_blocks=1 00:22:53.992 00:22:53.992 ' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.992 --rc genhtml_branch_coverage=1 00:22:53.992 --rc genhtml_function_coverage=1 00:22:53.992 --rc genhtml_legend=1 00:22:53.992 --rc geninfo_all_blocks=1 00:22:53.992 --rc geninfo_unexecuted_blocks=1 00:22:53.992 00:22:53.992 ' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:53.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.992 --rc genhtml_branch_coverage=1 00:22:53.992 --rc genhtml_function_coverage=1 00:22:53.992 --rc genhtml_legend=1 00:22:53.992 --rc geninfo_all_blocks=1 00:22:53.992 --rc geninfo_unexecuted_blocks=1 00:22:53.992 00:22:53.992 ' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.992 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:53.993 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:53.993 10:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.575 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:00.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:00.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.575 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:00.576 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:00.576 10:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:00.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:00.576 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:01.517 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:04.063 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.368 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.368 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.369 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.369 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.369 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.369 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:23:09.369 00:23:09.369 --- 10.0.0.2 ping statistics --- 00:23:09.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.369 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:09.369 00:23:09.369 --- 10.0.0.1 ping statistics --- 00:23:09.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.369 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2106266 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2106266 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2106266 ']' 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.369 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.369 [2024-11-20 10:40:41.380209] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
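For reference, the nvmftestinit plumbing traced above boils down to the following commands (a minimal sketch assembled from this trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this rig):

    # Put the target-side E810 port in its own network namespace so that
    # initiator and target traffic crosses the physical link, not loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address the initiator side (default namespace) and the target side.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # Bring the links up on both sides.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator interface, then verify
    # reachability in both directions before launching nvmf_tgt.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1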
00:23:09.369 [2024-11-20 10:40:41.380277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.369 [2024-11-20 10:40:41.480959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.369 [2024-11-20 10:40:41.535617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.369 [2024-11-20 10:40:41.535670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.369 [2024-11-20 10:40:41.535679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.369 [2024-11-20 10:40:41.535686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.369 [2024-11-20 10:40:41.535693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.369 [2024-11-20 10:40:41.537757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.369 [2024-11-20 10:40:41.537917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.369 [2024-11-20 10:40:41.538052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.369 [2024-11-20 10:40:41.538051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.941 
10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:09.941 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 [2024-11-20 10:40:42.410366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 Malloc1 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.202 [2024-11-20 10:40:42.488503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2106509 00:23:10.202 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:10.203 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:12.746 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:12.746 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.746 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:12.746 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.746 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:12.746 "tick_rate": 2400000000, 00:23:12.746 "poll_groups": [ 00:23:12.746 { 00:23:12.746 "name": "nvmf_tgt_poll_group_000", 00:23:12.746 "admin_qpairs": 1, 00:23:12.746 "io_qpairs": 1, 00:23:12.746 "current_admin_qpairs": 1, 00:23:12.746 "current_io_qpairs": 1, 00:23:12.746 "pending_bdev_io": 0, 00:23:12.746 "completed_nvme_io": 15904, 00:23:12.746 "transports": [ 00:23:12.746 { 00:23:12.746 "trtype": "TCP" 00:23:12.746 } 00:23:12.746 ] 00:23:12.746 }, 00:23:12.746 { 00:23:12.746 "name": "nvmf_tgt_poll_group_001", 00:23:12.746 "admin_qpairs": 0, 00:23:12.746 "io_qpairs": 1, 00:23:12.746 "current_admin_qpairs": 0, 00:23:12.746 "current_io_qpairs": 1, 00:23:12.746 "pending_bdev_io": 0, 00:23:12.746 "completed_nvme_io": 18004, 00:23:12.746 "transports": [ 00:23:12.746 { 00:23:12.746 "trtype": "TCP" 00:23:12.746 } 00:23:12.746 ] 00:23:12.747 }, 00:23:12.747 { 00:23:12.747 "name": "nvmf_tgt_poll_group_002", 00:23:12.747 "admin_qpairs": 0, 00:23:12.747 "io_qpairs": 1, 00:23:12.747 "current_admin_qpairs": 0, 00:23:12.747 "current_io_qpairs": 1, 00:23:12.747 "pending_bdev_io": 0, 00:23:12.747 "completed_nvme_io": 18130, 00:23:12.747 "transports": [ 00:23:12.747 { 00:23:12.747 "trtype": "TCP" 00:23:12.747 } 00:23:12.747 ] 00:23:12.747 }, 00:23:12.747 { 00:23:12.747 "name": "nvmf_tgt_poll_group_003", 00:23:12.747 "admin_qpairs": 0, 00:23:12.747 "io_qpairs": 1, 00:23:12.747 "current_admin_qpairs": 0, 00:23:12.747 "current_io_qpairs": 1, 00:23:12.747 "pending_bdev_io": 0, 00:23:12.747 "completed_nvme_io": 15884, 00:23:12.747 "transports": [ 00:23:12.747 { 00:23:12.747 "trtype": "TCP" 00:23:12.747 } 00:23:12.747 ] 00:23:12.747 } 00:23:12.747 ] 00:23:12.747 }' 00:23:12.747 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:12.747 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:12.747 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:12.747 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:12.747 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2106509 00:23:20.879 Initializing NVMe Controllers 00:23:20.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:20.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:20.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:20.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:23:20.879 Initialization complete. Launching workers.
00:23:20.879 ========================================================
00:23:20.879 Latency(us)
00:23:20.879 Device Information : IOPS MiB/s Average min max
00:23:20.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12452.10 48.64 5140.12 1550.46 13110.28
00:23:20.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13397.70 52.33 4777.49 1260.19 13088.42
00:23:20.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13564.10 52.98 4731.50 1268.67 45218.71
00:23:20.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12891.30 50.36 4964.43 1291.65 13055.70
00:23:20.879 ========================================================
00:23:20.879 Total : 52305.20 204.32 4897.97 1260.19 45218.71
00:23:20.879
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:20.879 rmmod nvme_tcp
00:23:20.879 rmmod nvme_fabrics
00:23:20.879 rmmod nvme_keyring
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2106266 ']'
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2106266
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2106266 ']'
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2106266
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106266
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106266'
00:23:20.879 killing process with pid 2106266
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2106266
00:23:20.879 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2106266
00:23:20.879 10:40:53
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.879 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.833 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.833 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:22.833 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:22.833 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:24.745 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:26.664 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:31.960 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:31.960 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:31.960 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:31.960 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.960 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.961 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.961 10:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:31.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:23:31.961 00:23:31.961 --- 10.0.0.2 ping statistics --- 00:23:31.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.961 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:23:31.961 00:23:31.961 --- 10.0.0.1 ping statistics --- 00:23:31.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.961 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:31.961 net.core.busy_poll = 1 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:31.961 net.core.busy_read = 1 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:31.961 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:32.274 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2111081 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2111081 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2111081 ']' 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.275 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.275 [2024-11-20 10:41:04.585497] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:23:32.275 [2024-11-20 10:41:04.585567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.560 [2024-11-20 10:41:04.691768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.560 [2024-11-20 10:41:04.744135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
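The three tc invocations above are the heart of the ADQ setup: mqprio carves the port into two hardware traffic classes (queues 2@0 2@2 gives TC0 queues 0-1 and TC1 queues 2-3), and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to TC1. As standalone commands:

    NS="ip netns exec cvl_0_0_ns_spdk"    # expanded unquoted below; simple words only
    $NS tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

skip_sw demands the match happen in the NIC; if the hardware cannot offload it, the filter add fails loudly instead of silently falling back to software.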
00:23:32.560 [2024-11-20 10:41:04.744200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.560 [2024-11-20 10:41:04.744209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.560 [2024-11-20 10:41:04.744216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.560 [2024-11-20 10:41:04.744223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.560 [2024-11-20 10:41:04.746257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.560 [2024-11-20 10:41:04.746850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.560 [2024-11-20 10:41:04.747014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.560 [2024-11-20 10:41:04.747016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.153 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.414 10:41:05 
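nvmf_tgt was started with --wait-for-rpc precisely so the socket implementation could be tuned before framework init, which is what the three rpc_cmd calls above do. rpc_cmd is the test suite's persistent wrapper around scripts/rpc.py; a plain-invocation equivalent (assuming the default /var/tmp/spdk.sock) would be roughly:

    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
    scripts/rpc.py sock_impl_set_options -i "$impl" \
        --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init

Placement id 1 groups incoming connections by the receive queue (NAPI id) they arrive on, which is what makes the ADQ steering visible in the poll-group stats later.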
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 [2024-11-20 10:41:05.611345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 Malloc1 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 [2024-11-20 10:41:05.688376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2111437 00:23:33.414 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:33.415 10:41:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.957 10:41:07 
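The rpc_cmd sequence above is the complete target provisioning path: create the TCP transport (socket priority 1, lining up with the ADQ traffic class), back it with a 64 MiB malloc bdev, and expose that as a namespace of cnode1 on 10.0.0.2:4420. Spelled out against scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB backing store, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The perf initiator is then pointed at exactly that listener with -c 0xF0, four cores disjoint from the target's 0xF mask.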
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:35.957 "tick_rate": 2400000000, 00:23:35.957 "poll_groups": [ 00:23:35.957 { 00:23:35.957 "name": "nvmf_tgt_poll_group_000", 00:23:35.957 "admin_qpairs": 1, 00:23:35.957 "io_qpairs": 4, 00:23:35.957 "current_admin_qpairs": 1, 00:23:35.957 "current_io_qpairs": 4, 00:23:35.957 "pending_bdev_io": 0, 00:23:35.957 "completed_nvme_io": 33307, 00:23:35.957 "transports": [ 00:23:35.957 { 00:23:35.957 "trtype": "TCP" 00:23:35.957 } 00:23:35.957 ] 00:23:35.957 }, 00:23:35.957 { 00:23:35.957 "name": "nvmf_tgt_poll_group_001", 00:23:35.957 "admin_qpairs": 0, 00:23:35.957 "io_qpairs": 0, 00:23:35.957 "current_admin_qpairs": 0, 00:23:35.957 "current_io_qpairs": 0, 00:23:35.957 "pending_bdev_io": 0, 00:23:35.957 "completed_nvme_io": 0, 00:23:35.957 "transports": [ 00:23:35.957 { 00:23:35.957 "trtype": "TCP" 00:23:35.957 } 00:23:35.957 ] 00:23:35.957 }, 00:23:35.957 { 00:23:35.957 "name": "nvmf_tgt_poll_group_002", 00:23:35.957 "admin_qpairs": 0, 00:23:35.957 "io_qpairs": 0, 00:23:35.957 "current_admin_qpairs": 0, 00:23:35.957 "current_io_qpairs": 0, 00:23:35.957 "pending_bdev_io": 0, 00:23:35.957 "completed_nvme_io": 0, 00:23:35.957 "transports": [ 00:23:35.957 { 00:23:35.957 "trtype": "TCP" 00:23:35.957 } 00:23:35.957 ] 00:23:35.957 }, 00:23:35.957 { 00:23:35.957 "name": "nvmf_tgt_poll_group_003", 00:23:35.957 "admin_qpairs": 0, 00:23:35.957 "io_qpairs": 0, 00:23:35.957 "current_admin_qpairs": 0, 00:23:35.957 "current_io_qpairs": 0, 00:23:35.957 "pending_bdev_io": 0, 00:23:35.957 "completed_nvme_io": 0, 00:23:35.957 "transports": [ 00:23:35.957 { 00:23:35.957 "trtype": "TCP" 00:23:35.957 } 00:23:35.957 ] 00:23:35.957 } 00:23:35.957 ] 00:23:35.957 }' 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:23:35.957 10:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2111437 00:23:44.088 Initializing NVMe Controllers 00:23:44.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:44.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:44.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:44.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:44.088 Initialization complete. Launching workers. 
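The jq pipeline above encodes the actual ADQ assertion. nvmf_get_stats shows all four initiator io_qpairs landed on nvmf_tgt_poll_group_000 (current_io_qpairs: 4) with the other three groups idle; the test therefore counts idle groups and only objects when fewer than two are idle. Reconstructed as a standalone check:

    idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)                       # one output line per idle poll group; 3 in this run
    if [[ $idle -lt 2 ]]; then
        echo "qpairs were spread across poll groups; ADQ steering ineffective" >&2
    fi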
00:23:44.088 ======================================================== 00:23:44.088 Latency(us) 00:23:44.088 Device Information : IOPS MiB/s Average min max 00:23:44.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8652.10 33.80 7397.95 990.39 60282.87 00:23:44.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5755.20 22.48 11144.28 1574.75 57134.65 00:23:44.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5865.10 22.91 10926.66 1166.79 60084.95 00:23:44.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4690.40 18.32 13646.15 1888.51 57265.75 00:23:44.088 ======================================================== 00:23:44.088 Total : 24962.79 97.51 10264.77 990.39 60282.87 00:23:44.088 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.088 rmmod nvme_tcp 00:23:44.088 rmmod nvme_fabrics 00:23:44.088 rmmod nvme_keyring 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2111081 ']' 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2111081 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2111081 ']' 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2111081 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.088 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2111081 00:23:44.088 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.088 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2111081' 00:23:44.089 killing process with pid 2111081 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2111081 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2111081 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.089 
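A sanity check on the result table above: the Total row's 10264.77 us average is the IOPS-weighted mean of the four per-core averages, not the plain mean (which would be about 10778 us). Reproducible from the table itself:

    awk 'BEGIN {
        w    = 8652.10*7397.95 + 5755.20*11144.28 + 5865.10*10926.66 + 4690.40*13646.15
        iops = 8652.10 + 5755.20 + 5865.10 + 4690.40        # 24962.80, matching Total IOPS
        printf "%.2f us weighted average\n", w / iops       # ~10264.77, matching the table
    }'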
10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.089 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:47.387 00:23:47.387 real 0m54.364s 00:23:47.387 user 2m50.970s 00:23:47.387 sys 0m11.328s 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.387 ************************************ 00:23:47.387 END TEST nvmf_perf_adq 00:23:47.387 ************************************ 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:47.387 ************************************ 00:23:47.387 START TEST nvmf_shutdown 00:23:47.387 ************************************ 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:47.387 * Looking for test storage... 
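iptr above is the teardown half of the tagged-rule pattern from setup: instead of tracking rule positions, it rewrites the whole ruleset with every SPDK_NVMF-commented rule filtered out, leaving unrelated firewall state untouched:

    iptables-save | grep -v SPDK_NVMF | iptables-restore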
00:23:47.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:47.387 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.388 --rc genhtml_branch_coverage=1 00:23:47.388 --rc genhtml_function_coverage=1 00:23:47.388 --rc genhtml_legend=1 00:23:47.388 --rc geninfo_all_blocks=1 00:23:47.388 --rc geninfo_unexecuted_blocks=1 00:23:47.388 00:23:47.388 ' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.388 --rc genhtml_branch_coverage=1 00:23:47.388 --rc genhtml_function_coverage=1 00:23:47.388 --rc genhtml_legend=1 00:23:47.388 --rc geninfo_all_blocks=1 00:23:47.388 --rc geninfo_unexecuted_blocks=1 00:23:47.388 00:23:47.388 ' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.388 --rc genhtml_branch_coverage=1 00:23:47.388 --rc genhtml_function_coverage=1 00:23:47.388 --rc genhtml_legend=1 00:23:47.388 --rc geninfo_all_blocks=1 00:23:47.388 --rc geninfo_unexecuted_blocks=1 00:23:47.388 00:23:47.388 ' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.388 --rc genhtml_branch_coverage=1 00:23:47.388 --rc genhtml_function_coverage=1 00:23:47.388 --rc genhtml_legend=1 00:23:47.388 --rc geninfo_all_blocks=1 00:23:47.388 --rc geninfo_unexecuted_blocks=1 00:23:47.388 00:23:47.388 ' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
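The cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x, which selects the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings exported just after it. A simplified sketch of the comparison, assuming purely numeric fields (the real helper also routes each field through its decimal() validator):

    lt() {   # true when $1 < $2; fields split on "." "-" ":" as in the trace
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.x detected"   # matches the "lt 1.15 2" call traced above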
00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:47.388 10:41:19 
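Two notes on the span above. First, the host identity: nvme gen-hostnqn emits the UUID-form NQN, and the bare UUID kept as NVME_HOSTID is that NQN minus its fixed prefix (the exact derivation in common.sh is not shown in the trace; prefix-stripping is one way to get it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # 00d0226a-fbea-ec11-9bc7-a4bf019282be in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Second, the lone error line -- common.sh line 33 printing "[: : integer expression expected" -- is test(1) choking on an empty operand in '[' '' -eq 1 ']'. The trace shows the expansion but not the variable's name, so MAYBE_FLAG below is a stand-in:

    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ]          # [: : integer expression expected (exit status 2)
    [ "${MAYBE_FLAG:-0}" -eq 1 ]     # hardened: unset/empty defaults to 0, clean false

The error is harmless here because the comparison simply evaluates false and the script continues.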
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:47.388 ************************************ 00:23:47.388 START TEST nvmf_shutdown_tc1 00:23:47.388 ************************************ 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.388 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.529 10:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.529 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.530 10:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:55.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:55.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:55.530 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:55.530 10:41:26 
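The discovery loop traced above maps each matching PCI function to its kernel netdev by globbing sysfs and stripping the path, which is how cvl_0_0/cvl_0_1 were found under 0000:4b:00.0 and 0000:4b:00.1. The core of that pattern:

    # For each candidate PCI address, list its net children and keep only the
    # interface names; pci_devs would hold entries like 0000:4b:00.0.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip leading path, keep ifnames
        net_devs+=("${pci_net_devs[@]}")
    done

The real helper additionally checks each device's operstate (the [[ up == up ]] lines) before accepting it.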
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:55.530 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.530 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:23:55.530 00:23:55.530 --- 10.0.0.2 ping statistics --- 00:23:55.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.530 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:23:55.530 00:23:55.530 --- 10.0.0.1 ping statistics --- 00:23:55.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.530 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2117904 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2117904 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:55.530 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2117904 ']' 00:23:55.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.531 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.531 [2024-11-20 10:41:27.246221] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:23:55.531 [2024-11-20 10:41:27.246288] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.531 [2024-11-20 10:41:27.346218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.531 [2024-11-20 10:41:27.398657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.531 [2024-11-20 10:41:27.398707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.531 [2024-11-20 10:41:27.398716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.531 [2024-11-20 10:41:27.398723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.531 [2024-11-20 10:41:27.398730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.531 [2024-11-20 10:41:27.400745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.531 [2024-11-20 10:41:27.400907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.531 [2024-11-20 10:41:27.401068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.531 [2024-11-20 10:41:27.401068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.792 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.792 [2024-11-20 10:41:28.124801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:55.793 10:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:55.793 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.054 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:56.054 Malloc1 
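shutdown.sh@27-36 above shows the provisioning being batched: the loop cats one block of RPC lines per subsystem into rpcs.txt (xtrace collapses the heredoc bodies to bare "cat"), and a single rpc_cmd then replays the whole file rather than spawning ~40 rpc.py processes. A hedged reconstruction, modeling each block on the single-subsystem sequence from the perf_adq run and on the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults set earlier (serial numbers illustrative):

    rm -rf rpcs.txt
    for i in {1..10}; do
        cat >> rpcs.txt <<EOF
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    scripts/rpc.py < rpcs.txt    # rpc.py executes one command per stdin line

This is why Malloc1 through Malloc10 appear back to back in the output with a single listener notice.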
00:23:56.054 [2024-11-20 10:41:28.249964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.054 Malloc2 00:23:56.054 Malloc3 00:23:56.054 Malloc4 00:23:56.054 Malloc5 00:23:56.316 Malloc6 00:23:56.316 Malloc7 00:23:56.316 Malloc8 00:23:56.316 Malloc9 00:23:56.316 Malloc10 00:23:56.316 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.316 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:56.316 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.316 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2118267 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2118267 /var/tmp/bdevperf.sock 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2118267 ']' 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
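The bdev_svc launch that follows gets its entire bdev configuration generated on the fly: gen_nvmf_target_json (nvmf/common.sh@560-586) expands one heredoc per requested subsystem into a config array, joins the fragments with commas under IFS=',', and the result reaches the app through process substitution as --json /dev/fd/63. A condensed sketch of that generator, reconstructed from the heredocs visible in the trace (printf stands in for the heredocs; the jq step at common.sh@584 that validates the assembled document is elided):

  gen_nvmf_target_json() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do
          # one attach-controller fragment per subsystem, mirroring the trace
          config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
              "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
      done
      local IFS=,
      printf '%s\n' "${config[*]}"    # comma-joined, as at common.sh@585-586
  }

  # Usage matching shutdown.sh@78: the generated JSON is never written to disk,
  # it is streamed straight into the app's --json file descriptor:
  # bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)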
00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:56.578 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 
00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 [2024-11-20 10:41:28.765704] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:23:56.579 [2024-11-20 10:41:28.765782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.579 { 00:23:56.579 "params": { 00:23:56.579 "name": "Nvme$subsystem", 00:23:56.579 "trtype": "$TEST_TRANSPORT", 00:23:56.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.579 "adrfam": "ipv4", 00:23:56.579 "trsvcid": "$NVMF_PORT", 00:23:56.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.579 "hdgst": ${hdgst:-false}, 00:23:56.579 "ddgst": ${ddgst:-false} 00:23:56.579 }, 00:23:56.579 "method": "bdev_nvme_attach_controller" 00:23:56.579 } 00:23:56.579 EOF 00:23:56.579 )") 00:23:56.579 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.580 { 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme$subsystem", 00:23:56.580 "trtype": "$TEST_TRANSPORT", 00:23:56.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "$NVMF_PORT", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.580 "hdgst": ${hdgst:-false}, 00:23:56.580 "ddgst": ${ddgst:-false} 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 } 00:23:56.580 EOF 00:23:56.580 )") 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.580 { 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme$subsystem", 00:23:56.580 "trtype": "$TEST_TRANSPORT", 00:23:56.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.580 "adrfam": "ipv4", 
00:23:56.580 "trsvcid": "$NVMF_PORT", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.580 "hdgst": ${hdgst:-false}, 00:23:56.580 "ddgst": ${ddgst:-false} 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 } 00:23:56.580 EOF 00:23:56.580 )") 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:56.580 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme1", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme2", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme3", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme4", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme5", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme6", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme7", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 
"adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme8", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme9", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 },{ 00:23:56.580 "params": { 00:23:56.580 "name": "Nvme10", 00:23:56.580 "trtype": "tcp", 00:23:56.580 "traddr": "10.0.0.2", 00:23:56.580 "adrfam": "ipv4", 00:23:56.580 "trsvcid": "4420", 00:23:56.580 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:56.580 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:56.580 "hdgst": false, 00:23:56.580 "ddgst": false 00:23:56.580 }, 00:23:56.580 "method": "bdev_nvme_attach_controller" 00:23:56.580 }' 00:23:56.580 [2024-11-20 10:41:28.861855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.580 [2024-11-20 10:41:28.914648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2118267 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:57.965 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:58.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2118267 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2117904 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.908 { 00:23:58.908 "params": { 00:23:58.908 "name": "Nvme$subsystem", 00:23:58.908 "trtype": "$TEST_TRANSPORT", 00:23:58.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.908 "adrfam": "ipv4", 00:23:58.908 "trsvcid": "$NVMF_PORT", 00:23:58.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.908 "hdgst": ${hdgst:-false}, 00:23:58.908 "ddgst": ${ddgst:-false} 00:23:58.908 }, 00:23:58.908 "method": "bdev_nvme_attach_controller" 00:23:58.908 } 00:23:58.908 EOF 00:23:58.908 )") 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.908 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.908 { 00:23:58.908 "params": { 00:23:58.908 "name": "Nvme$subsystem", 00:23:58.908 "trtype": "$TEST_TRANSPORT", 00:23:58.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.908 "adrfam": "ipv4", 00:23:58.908 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 [2024-11-20 10:41:31.241785] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:23:58.909 [2024-11-20 10:41:31.241841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118662 ] 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.909 { 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme$subsystem", 00:23:58.909 "trtype": "$TEST_TRANSPORT", 00:23:58.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.909 
"adrfam": "ipv4", 00:23:58.909 "trsvcid": "$NVMF_PORT", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.909 "hdgst": ${hdgst:-false}, 00:23:58.909 "ddgst": ${ddgst:-false} 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 } 00:23:58.909 EOF 00:23:58.909 )") 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:58.909 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme1", 00:23:58.909 "trtype": "tcp", 00:23:58.909 "traddr": "10.0.0.2", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "4420", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.909 "hdgst": false, 00:23:58.909 "ddgst": false 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 },{ 00:23:58.909 "params": { 00:23:58.909 "name": "Nvme2", 00:23:58.909 "trtype": "tcp", 00:23:58.909 "traddr": "10.0.0.2", 00:23:58.909 "adrfam": "ipv4", 00:23:58.909 "trsvcid": "4420", 00:23:58.909 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.909 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.909 "hdgst": false, 00:23:58.909 "ddgst": false 00:23:58.909 }, 00:23:58.909 "method": "bdev_nvme_attach_controller" 00:23:58.909 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme3", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme4", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme5", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme6", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme7", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 
00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme8", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme9", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 },{ 00:23:58.910 "params": { 00:23:58.910 "name": "Nvme10", 00:23:58.910 "trtype": "tcp", 00:23:58.910 "traddr": "10.0.0.2", 00:23:58.910 "adrfam": "ipv4", 00:23:58.910 "trsvcid": "4420", 00:23:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:58.910 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:58.910 "hdgst": false, 00:23:58.910 "ddgst": false 00:23:58.910 }, 00:23:58.910 "method": "bdev_nvme_attach_controller" 00:23:58.910 }' 00:23:59.170 [2024-11-20 10:41:31.331638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.170 [2024-11-20 10:41:31.367723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.550 Running I/O for 1 seconds... 
00:24:01.490 1856.00 IOPS, 116.00 MiB/s 00:24:01.490 Latency(us) 00:24:01.490 [2024-11-20T09:41:33.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme1n1 : 1.09 234.58 14.66 0.00 0.00 269747.41 22719.15 242920.11 00:24:01.490 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme2n1 : 1.18 217.84 13.61 0.00 0.00 285950.51 21080.75 262144.00 00:24:01.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme3n1 : 1.06 242.35 15.15 0.00 0.00 251498.67 20971.52 242920.11 00:24:01.490 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme4n1 : 1.08 235.99 14.75 0.00 0.00 253850.03 38447.79 221948.59 00:24:01.490 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme5n1 : 1.19 269.49 16.84 0.00 0.00 219075.58 17694.72 244667.73 00:24:01.490 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme6n1 : 1.14 228.87 14.30 0.00 0.00 247767.20 4423.68 246415.36 00:24:01.490 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme7n1 : 1.19 269.27 16.83 0.00 0.00 211918.34 9939.63 242920.11 00:24:01.490 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme8n1 : 1.19 267.99 16.75 0.00 0.00 208715.69 8137.39 246415.36 00:24:01.490 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme9n1 : 1.20 266.80 16.68 0.00 0.00 206494.81 13434.88 242920.11 00:24:01.490 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.490 Verification LBA range: start 0x0 length 0x400 00:24:01.490 Nvme10n1 : 1.18 216.62 13.54 0.00 0.00 249015.25 14964.05 267386.88 00:24:01.490 [2024-11-20T09:41:33.866Z] =================================================================================================================== 00:24:01.490 [2024-11-20T09:41:33.866Z] Total : 2449.79 153.11 0.00 0.00 237801.65 4423.68 267386.88 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:01.751 10:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.751 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.751 rmmod nvme_tcp 00:24:01.751 rmmod nvme_fabrics 00:24:01.751 rmmod nvme_keyring 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2117904 ']' 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2117904 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2117904 ']' 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2117904 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117904 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117904' 00:24:01.751 killing process with pid 2117904 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2117904 00:24:01.751 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2117904 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:02.011 10:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.011 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.554 00:24:04.554 real 0m16.756s 00:24:04.554 user 0m33.512s 00:24:04.554 sys 0m6.988s 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:04.554 ************************************ 00:24:04.554 END TEST nvmf_shutdown_tc1 00:24:04.554 ************************************ 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:04.554 ************************************ 00:24:04.554 START TEST nvmf_shutdown_tc2 00:24:04.554 ************************************ 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.554 
10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.554 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:04.555 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:04.555 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:04.555 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:04.555 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.555 10:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:24:04.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:24:04.555 00:24:04.555 --- 10.0.0.2 ping statistics --- 00:24:04.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.555 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:24:04.555 00:24:04.555 --- 10.0.0.1 ping statistics --- 00:24:04.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.555 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.555 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2119779 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2119779 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2119779 ']' 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.556 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:04.556 [2024-11-20 10:41:36.889462] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:24:04.556 [2024-11-20 10:41:36.889516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.815 [2024-11-20 10:41:36.983721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.815 [2024-11-20 10:41:37.024149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.815 [2024-11-20 10:41:37.024195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.815 [2024-11-20 10:41:37.024202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.815 [2024-11-20 10:41:37.024207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.815 [2024-11-20 10:41:37.024212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.815 [2024-11-20 10:41:37.025867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.815 [2024-11-20 10:41:37.026025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.815 [2024-11-20 10:41:37.026197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:04.815 [2024-11-20 10:41:37.026217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.383 [2024-11-20 10:41:37.748003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.383 
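The nvmf_tcp_init trace above boils down to a short, reproducible sequence: move one port of the connected NIC pair into a private network namespace, address the two ends from the same /24, open the NVMe/TCP port through iptables, and ping in both directions to prove the link. A minimal sketch of that sequence, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses from this run (any connected port pair works):

# Sketch of the topology nvmf_tcp_init builds above.
ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

With that in place, nvmf_tgt is started under the ip netns exec prefix recorded in NVMF_TARGET_NS_CMD, so the target listens on 10.0.0.2 inside the namespace while the initiator connects from the root namespace.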
10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.383 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:05.643 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.643 Malloc1 00:24:05.643 [2024-11-20 10:41:37.859248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.643 Malloc2 00:24:05.643 Malloc3 00:24:05.643 Malloc4 00:24:05.644 Malloc5 00:24:05.904 Malloc6 00:24:05.904 Malloc7 00:24:05.904 Malloc8 00:24:05.904 Malloc9 00:24:05.904 Malloc10 00:24:05.904 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.904 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2120162 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2120162 /var/tmp/bdevperf.sock 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2120162 ']' 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
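The create_subsystems phase traced above appends one block of RPCs per subsystem to rpcs.txt (the ten cat calls) and then plays the whole file through the bare rpc_cmd at shutdown.sh:36, which reads a batch of RPCs from stdin; the Malloc1 through Malloc10 lines are the bdev names echoed back as each block executes. A hedged sketch of what one loop iteration might append, using the cnode NQN pattern and the 10.0.0.2:4420 listener visible in this log (the malloc size and serial number here are illustrative, not taken from the trace):

# Hypothetical rpcs.txt fragment for i=1; the loop emits ten of these.
{
    echo "bdev_malloc_create -b Malloc1 64 512"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK0000000000000001"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
} >> rpcs.txt

Batching all ten subsystems into a single stdin-fed RPC session avoids re-running the RPC client once per command, which matters as the loop count grows.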
00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:05.905 { 00:24:05.905 "params": { 00:24:05.905 "name": "Nvme$subsystem", 00:24:05.905 "trtype": "$TEST_TRANSPORT", 00:24:05.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.905 "adrfam": "ipv4", 00:24:05.905 "trsvcid": "$NVMF_PORT", 00:24:05.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.905 "hdgst": ${hdgst:-false}, 00:24:05.905 "ddgst": ${ddgst:-false} 00:24:05.905 }, 00:24:05.905 "method": "bdev_nvme_attach_controller" 00:24:05.905 } 00:24:05.905 EOF 00:24:05.905 )") 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:05.905 { 00:24:05.905 "params": { 00:24:05.905 "name": "Nvme$subsystem", 00:24:05.905 "trtype": "$TEST_TRANSPORT", 00:24:05.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.905 "adrfam": "ipv4", 00:24:05.905 "trsvcid": "$NVMF_PORT", 00:24:05.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.905 "hdgst": ${hdgst:-false}, 00:24:05.905 "ddgst": ${ddgst:-false} 00:24:05.905 }, 00:24:05.905 "method": "bdev_nvme_attach_controller" 00:24:05.905 } 00:24:05.905 EOF 00:24:05.905 )") 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:05.905 { 00:24:05.905 "params": { 00:24:05.905 "name": "Nvme$subsystem", 00:24:05.905 "trtype": "$TEST_TRANSPORT", 00:24:05.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.905 "adrfam": "ipv4", 00:24:05.905 "trsvcid": "$NVMF_PORT", 00:24:05.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.905 "hdgst": ${hdgst:-false}, 00:24:05.905 "ddgst": ${ddgst:-false} 00:24:05.905 }, 00:24:05.905 "method": 
"bdev_nvme_attach_controller" 00:24:05.905 } 00:24:05.905 EOF 00:24:05.905 )") 00:24:05.905 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.165 { 00:24:06.165 "params": { 00:24:06.165 "name": "Nvme$subsystem", 00:24:06.165 "trtype": "$TEST_TRANSPORT", 00:24:06.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.165 "adrfam": "ipv4", 00:24:06.165 "trsvcid": "$NVMF_PORT", 00:24:06.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.165 "hdgst": ${hdgst:-false}, 00:24:06.165 "ddgst": ${ddgst:-false} 00:24:06.165 }, 00:24:06.165 "method": "bdev_nvme_attach_controller" 00:24:06.165 } 00:24:06.165 EOF 00:24:06.165 )") 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.165 { 00:24:06.165 "params": { 00:24:06.165 "name": "Nvme$subsystem", 00:24:06.165 "trtype": "$TEST_TRANSPORT", 00:24:06.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.165 "adrfam": "ipv4", 00:24:06.165 "trsvcid": "$NVMF_PORT", 00:24:06.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.165 "hdgst": ${hdgst:-false}, 00:24:06.165 "ddgst": ${ddgst:-false} 00:24:06.165 }, 00:24:06.165 "method": "bdev_nvme_attach_controller" 00:24:06.165 } 00:24:06.165 EOF 00:24:06.165 )") 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.165 { 00:24:06.165 "params": { 00:24:06.165 "name": "Nvme$subsystem", 00:24:06.165 "trtype": "$TEST_TRANSPORT", 00:24:06.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.165 "adrfam": "ipv4", 00:24:06.165 "trsvcid": "$NVMF_PORT", 00:24:06.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.165 "hdgst": ${hdgst:-false}, 00:24:06.165 "ddgst": ${ddgst:-false} 00:24:06.165 }, 00:24:06.165 "method": "bdev_nvme_attach_controller" 00:24:06.165 } 00:24:06.165 EOF 00:24:06.165 )") 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.165 [2024-11-20 10:41:38.301027] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:24:06.165 [2024-11-20 10:41:38.301082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120162 ] 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.165 { 00:24:06.165 "params": { 00:24:06.165 "name": "Nvme$subsystem", 00:24:06.165 "trtype": "$TEST_TRANSPORT", 00:24:06.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.165 "adrfam": "ipv4", 00:24:06.165 "trsvcid": "$NVMF_PORT", 00:24:06.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.165 "hdgst": ${hdgst:-false}, 00:24:06.165 "ddgst": ${ddgst:-false} 00:24:06.165 }, 00:24:06.165 "method": "bdev_nvme_attach_controller" 00:24:06.165 } 00:24:06.165 EOF 00:24:06.165 )") 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.165 { 00:24:06.165 "params": { 00:24:06.165 "name": "Nvme$subsystem", 00:24:06.165 "trtype": "$TEST_TRANSPORT", 00:24:06.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.165 "adrfam": "ipv4", 00:24:06.165 "trsvcid": "$NVMF_PORT", 00:24:06.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.165 "hdgst": ${hdgst:-false}, 00:24:06.165 "ddgst": ${ddgst:-false} 00:24:06.165 }, 00:24:06.165 "method": "bdev_nvme_attach_controller" 00:24:06.165 } 00:24:06.165 EOF 00:24:06.165 )") 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.165 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.165 { 00:24:06.165 "params": { 00:24:06.165 "name": "Nvme$subsystem", 00:24:06.165 "trtype": "$TEST_TRANSPORT", 00:24:06.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.165 "adrfam": "ipv4", 00:24:06.165 "trsvcid": "$NVMF_PORT", 00:24:06.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.165 "hdgst": ${hdgst:-false}, 00:24:06.166 "ddgst": ${ddgst:-false} 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 } 00:24:06.166 EOF 00:24:06.166 )") 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.166 { 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme$subsystem", 00:24:06.166 "trtype": "$TEST_TRANSPORT", 00:24:06.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.166 
"adrfam": "ipv4", 00:24:06.166 "trsvcid": "$NVMF_PORT", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.166 "hdgst": ${hdgst:-false}, 00:24:06.166 "ddgst": ${ddgst:-false} 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 } 00:24:06.166 EOF 00:24:06.166 )") 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:06.166 10:41:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme1", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme2", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme3", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme4", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme5", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme6", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme7", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 
00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme8", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme9", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 },{ 00:24:06.166 "params": { 00:24:06.166 "name": "Nvme10", 00:24:06.166 "trtype": "tcp", 00:24:06.166 "traddr": "10.0.0.2", 00:24:06.166 "adrfam": "ipv4", 00:24:06.166 "trsvcid": "4420", 00:24:06.166 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:06.166 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:06.166 "hdgst": false, 00:24:06.166 "ddgst": false 00:24:06.166 }, 00:24:06.166 "method": "bdev_nvme_attach_controller" 00:24:06.166 }' 00:24:06.166 [2024-11-20 10:41:38.391589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.166 [2024-11-20 10:41:38.427662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.543 Running I/O for 10 seconds... 
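The JSON bdevperf just consumed is produced by gen_nvmf_target_json, whose expansion fills the trace above: one heredoc fragment per subsystem is pushed into a config array, the fragments are comma-joined via IFS, and the result is normalized by jq before bdevperf reads it from /dev/fd/63 through process substitution. A reduced sketch of the same pattern, two controllers instead of ten; the "params" fields are copied from the rendered output above, but the outer "subsystems"/"bdev" envelope is an assumption about the shape bdevperf --json expects rather than something shown verbatim in this trace:

# Reduced sketch of the gen_nvmf_target_json pattern traced above.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# IFS=',' makes "${config[*]}" join the fragments with commas; jq validates
# and pretty-prints. In the trace the result reaches bdevperf as
# --json /dev/fd/63 via process substitution, so nothing hits disk.
jq . <<JSON
{"subsystems": [{"subsystem": "bdev", "config": [$(IFS=,; printf '%s' "${config[*]}")]}]}
JSON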
00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.543 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:07.802 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.802 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:07.802 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:07.802 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.062 10:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:08.062 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2120162 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2120162 ']' 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2120162 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2120162 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2120162' 00:24:08.323 killing process with pid 2120162 00:24:08.323 10:41:40 
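The polling just traced is shutdown.sh's waitforio helper: it asks the bdevperf RPC server for Nvme1n1's iostat until the read counter proves I/O is flowing, with a ten-attempt budget and a 0.25 s pause between attempts (the counter goes 3, 67, 131 above before the -ge 100 check passes). A sketch of that loop shape, reusing the rpc_cmd/jq pipeline from the trace:

# Sketch of the waitforio polling loop traced above.
i=10
while (( i != 0 )); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        break                         # bdevperf is demonstrably moving I/O
    fi
    sleep 0.25
    i=$((i - 1))
done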
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2120162
00:24:08.323 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2120162
00:24:08.323 Received shutdown signal, test time was about 0.977999 seconds
00:24:08.323
00:24:08.323                                                        Latency(us)
00:24:08.323 [2024-11-20T09:41:40.699Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:08.323 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme1n1            :       0.96     265.43      16.59       0.00     0.00  238339.63   14854.83  246415.36
00:24:08.323 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme2n1            :       0.94     203.84      12.74       0.00     0.00  303800.04   17367.04  232434.35
00:24:08.323 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme3n1            :       0.98     262.08      16.38       0.00     0.00  231916.80   15728.64  249910.61
00:24:08.323 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme4n1            :       0.96     271.12      16.95       0.00     0.00  218917.91    3003.73  246415.36
00:24:08.323 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme5n1            :       0.97     263.66      16.48       0.00     0.00  220862.29   32112.64  225443.84
00:24:08.323 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme6n1            :       0.95     201.13      12.57       0.00     0.00  282879.15   21189.97  258648.75
00:24:08.323 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme7n1            :       0.97     264.64      16.54       0.00     0.00  210692.48   18568.53  246415.36
00:24:08.323 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme8n1            :       0.97     262.77      16.42       0.00     0.00  207643.31   14636.37  242920.11
00:24:08.323 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme9n1            :       0.94     203.25      12.70       0.00     0.00  260845.23   30583.47  228939.09
00:24:08.323 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:08.323 Verification LBA range: start 0x0 length 0x400
00:24:08.323 Nvme10n1           :       0.96     199.57      12.47       0.00     0.00  260560.50   25340.59  274377.39
00:24:08.323 [2024-11-20T09:41:40.699Z] ===================================================================================================================
00:24:08.323 [2024-11-20T09:41:40.699Z] Total              :              2397.48     149.84       0.00     0.00  239900.92    3003.73  274377.39
00:24:08.583 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2119779
00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:09.520 10:41:41
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.520 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.520 rmmod nvme_tcp 00:24:09.520 rmmod nvme_fabrics 00:24:09.520 rmmod nvme_keyring 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2119779 ']' 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2119779 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2119779 ']' 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2119779 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2119779 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2119779' 00:24:09.780 killing process with pid 2119779 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2119779 00:24:09.780 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2119779 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:10.040 10:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.040 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.950 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.950 00:24:11.950 real 0m7.814s 00:24:11.950 user 0m23.474s 00:24:11.950 sys 0m1.270s 00:24:11.950 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.950 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:11.950 ************************************ 00:24:11.950 END TEST nvmf_shutdown_tc2 00:24:11.950 ************************************ 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 ************************************ 00:24:12.211 START TEST nvmf_shutdown_tc3 00:24:12.211 ************************************ 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:12.211 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:12.211 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.211 10:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:12.211 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:12.211 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.211 10:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.211 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.212 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.212 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.471 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.471 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.471 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.471 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:24:12.471 00:24:12.471 --- 10.0.0.2 ping statistics --- 00:24:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.471 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:24:12.471 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:24:12.471 00:24:12.471 --- 10.0.0.1 ping statistics --- 00:24:12.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.472 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2121620 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2121620 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:12.472 10:41:44 
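
What nvmf_tcp_init just did, collapsed into a standalone sequence (the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are copied from the trace; this is a sketch of the effect, not the common.sh source):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                          # give the target-side port its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                    # root ns reaches the target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and back

The namespace split is the point of the exercise: both e810 ports live in the same host, and without it the kernel would route 10.0.0.1 -> 10.0.0.2 over loopback instead of putting the NVMe/TCP traffic on the wire between the two ports.
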
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2121620 ']' 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.472 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:12.472 [2024-11-20 10:41:44.803733] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:24:12.472 [2024-11-20 10:41:44.803790] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.731 [2024-11-20 10:41:44.894673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.731 [2024-11-20 10:41:44.926002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.731 [2024-11-20 10:41:44.926035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.731 [2024-11-20 10:41:44.926040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.731 [2024-11-20 10:41:44.926045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.731 [2024-11-20 10:41:44.926050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
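
A quick decode of the core mask the target was started with: -m 0x1E is 0b11110, so bit 0 (core 0) is clear and bits 1 through 4 are set, which matches both the "Total cores available: 4" notice above and the four reactors reported on cores 1, 2, 3 and 4 just below. A one-liner to verify (assuming bc is installed):

printf '0x1E = %d = 0b' 0x1E; echo "obase=2; $((0x1E))" | bc   # prints: 0x1E = 30 = 0b11110
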
00:24:12.731 [2024-11-20 10:41:44.927241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.731 [2024-11-20 10:41:44.927498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.731 [2024-11-20 10:41:44.927647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.731 [2024-11-20 10:41:44.927649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.302 [2024-11-20 10:41:45.640343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:24:13.302 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.563 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.563 Malloc1 00:24:13.563 [2024-11-20 10:41:45.750709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.563 Malloc2 00:24:13.563 Malloc3 00:24:13.563 Malloc4 00:24:13.563 Malloc5 00:24:13.563 Malloc6 00:24:13.824 Malloc7 00:24:13.824 Malloc8 00:24:13.824 Malloc9 00:24:13.824 Malloc10 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2121885 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2121885 /var/tmp/bdevperf.sock 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2121885 ']' 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.824 10:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.824 { 00:24:13.824 "params": { 00:24:13.824 "name": "Nvme$subsystem", 00:24:13.824 "trtype": "$TEST_TRANSPORT", 00:24:13.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.824 "adrfam": "ipv4", 00:24:13.824 "trsvcid": "$NVMF_PORT", 00:24:13.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.824 "hdgst": ${hdgst:-false}, 00:24:13.824 "ddgst": ${ddgst:-false} 00:24:13.824 }, 00:24:13.824 "method": "bdev_nvme_attach_controller" 00:24:13.824 } 00:24:13.824 EOF 00:24:13.824 )") 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.824 { 00:24:13.824 "params": { 00:24:13.824 "name": "Nvme$subsystem", 00:24:13.824 "trtype": "$TEST_TRANSPORT", 00:24:13.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.824 "adrfam": "ipv4", 00:24:13.824 "trsvcid": "$NVMF_PORT", 00:24:13.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.824 "hdgst": ${hdgst:-false}, 00:24:13.824 "ddgst": ${ddgst:-false} 00:24:13.824 }, 00:24:13.824 "method": "bdev_nvme_attach_controller" 00:24:13.824 } 00:24:13.824 EOF 00:24:13.824 )") 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.824 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.824 { 00:24:13.824 "params": { 00:24:13.824 
"name": "Nvme$subsystem", 00:24:13.824 "trtype": "$TEST_TRANSPORT", 00:24:13.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.824 "adrfam": "ipv4", 00:24:13.824 "trsvcid": "$NVMF_PORT", 00:24:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.825 "hdgst": ${hdgst:-false}, 00:24:13.825 "ddgst": ${ddgst:-false} 00:24:13.825 }, 00:24:13.825 "method": "bdev_nvme_attach_controller" 00:24:13.825 } 00:24:13.825 EOF 00:24:13.825 )") 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.825 { 00:24:13.825 "params": { 00:24:13.825 "name": "Nvme$subsystem", 00:24:13.825 "trtype": "$TEST_TRANSPORT", 00:24:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.825 "adrfam": "ipv4", 00:24:13.825 "trsvcid": "$NVMF_PORT", 00:24:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.825 "hdgst": ${hdgst:-false}, 00:24:13.825 "ddgst": ${ddgst:-false} 00:24:13.825 }, 00:24:13.825 "method": "bdev_nvme_attach_controller" 00:24:13.825 } 00:24:13.825 EOF 00:24:13.825 )") 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.825 { 00:24:13.825 "params": { 00:24:13.825 "name": "Nvme$subsystem", 00:24:13.825 "trtype": "$TEST_TRANSPORT", 00:24:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.825 "adrfam": "ipv4", 00:24:13.825 "trsvcid": "$NVMF_PORT", 00:24:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.825 "hdgst": ${hdgst:-false}, 00:24:13.825 "ddgst": ${ddgst:-false} 00:24:13.825 }, 00:24:13.825 "method": "bdev_nvme_attach_controller" 00:24:13.825 } 00:24:13.825 EOF 00:24:13.825 )") 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.825 { 00:24:13.825 "params": { 00:24:13.825 "name": "Nvme$subsystem", 00:24:13.825 "trtype": "$TEST_TRANSPORT", 00:24:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.825 "adrfam": "ipv4", 00:24:13.825 "trsvcid": "$NVMF_PORT", 00:24:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.825 "hdgst": ${hdgst:-false}, 00:24:13.825 "ddgst": ${ddgst:-false} 00:24:13.825 }, 00:24:13.825 "method": "bdev_nvme_attach_controller" 00:24:13.825 } 00:24:13.825 EOF 00:24:13.825 )") 00:24:13.825 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:14.087 [2024-11-20 10:41:46.198558] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:24:14.087 [2024-11-20 10:41:46.198610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121885 ] 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.087 { 00:24:14.087 "params": { 00:24:14.087 "name": "Nvme$subsystem", 00:24:14.087 "trtype": "$TEST_TRANSPORT", 00:24:14.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.087 "adrfam": "ipv4", 00:24:14.087 "trsvcid": "$NVMF_PORT", 00:24:14.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.087 "hdgst": ${hdgst:-false}, 00:24:14.087 "ddgst": ${ddgst:-false} 00:24:14.087 }, 00:24:14.087 "method": "bdev_nvme_attach_controller" 00:24:14.087 } 00:24:14.087 EOF 00:24:14.087 )") 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.087 { 00:24:14.087 "params": { 00:24:14.087 "name": "Nvme$subsystem", 00:24:14.087 "trtype": "$TEST_TRANSPORT", 00:24:14.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.087 "adrfam": "ipv4", 00:24:14.087 "trsvcid": "$NVMF_PORT", 00:24:14.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.087 "hdgst": ${hdgst:-false}, 00:24:14.087 "ddgst": ${ddgst:-false} 00:24:14.087 }, 00:24:14.087 "method": "bdev_nvme_attach_controller" 00:24:14.087 } 00:24:14.087 EOF 00:24:14.087 )") 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.087 { 00:24:14.087 "params": { 00:24:14.087 "name": "Nvme$subsystem", 00:24:14.087 "trtype": "$TEST_TRANSPORT", 00:24:14.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.087 "adrfam": "ipv4", 00:24:14.087 "trsvcid": "$NVMF_PORT", 00:24:14.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.087 "hdgst": ${hdgst:-false}, 00:24:14.087 "ddgst": ${ddgst:-false} 00:24:14.087 }, 00:24:14.087 "method": "bdev_nvme_attach_controller" 00:24:14.087 } 00:24:14.087 EOF 00:24:14.087 )") 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.087 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.087 { 00:24:14.087 "params": { 00:24:14.087 "name": "Nvme$subsystem", 00:24:14.087 "trtype": "$TEST_TRANSPORT", 00:24:14.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.087 
"adrfam": "ipv4", 00:24:14.087 "trsvcid": "$NVMF_PORT", 00:24:14.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.087 "hdgst": ${hdgst:-false}, 00:24:14.087 "ddgst": ${ddgst:-false} 00:24:14.087 }, 00:24:14.087 "method": "bdev_nvme_attach_controller" 00:24:14.087 } 00:24:14.087 EOF 00:24:14.087 )") 00:24:14.088 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:14.088 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:24:14.088 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:14.088 10:41:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme1", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme2", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme3", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme4", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme5", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme6", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme7", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 
00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme8", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme9", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 },{ 00:24:14.088 "params": { 00:24:14.088 "name": "Nvme10", 00:24:14.088 "trtype": "tcp", 00:24:14.088 "traddr": "10.0.0.2", 00:24:14.088 "adrfam": "ipv4", 00:24:14.088 "trsvcid": "4420", 00:24:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:14.088 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:14.088 "hdgst": false, 00:24:14.088 "ddgst": false 00:24:14.088 }, 00:24:14.088 "method": "bdev_nvme_attach_controller" 00:24:14.088 }' 00:24:14.088 [2024-11-20 10:41:46.288031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.088 [2024-11-20 10:41:46.324192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.999 Running I/O for 10 seconds... 
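
Three mechanics in the trace since the transport was created are worth spelling out, since the log only shows their side effects.

First, the shutdown.sh@28-29 loop traced earlier appends one block of RPC commands per subsystem to rpcs.txt, and the rpc_cmd call at shutdown.sh@36 executes the whole batch; that is where the Malloc1 through Malloc10 bdevs and the listener on 10.0.0.2:4420 come from. The per-subsystem block itself is never echoed, so the following is a plausible reconstruction with stock SPDK RPCs (the cnode$i NQNs and the listener address are confirmed by the log; the bdev geometry and serial numbers are assumptions):

for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

Second, gen_nvmf_target_json (the common.sh@560-586 trace) builds the bdevperf config by expanding a here-doc template once per subsystem, collecting the fragments in a bash array, comma-joining them via IFS=, and pretty-printing the result through jq. Reduced to a runnable sketch (three subsystems instead of ten, a trimmed field list, and a bare "config" wrapper standing in for the full target JSON):

config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" } }
EOF
)")
done
IFS=,
printf '{ "config": [ %s ] }' "${config[*]}" | jq .   # comma-join the fragments, let jq validate

Third, the --json /dev/fd/63 in the bdevperf command line is the footprint of bash process substitution: the generated config never touches disk, bdevperf is simply handed the file-descriptor path that <(...) produces. In sketch form:

build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10

The remaining flags line up with what the run then does: queue depth 64, 64 KiB I/Os (-o 65536), a verify read/write workload, and the ten-second run announced by "Running I/O for 10 seconds...".
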
00:24:15.999 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.999 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:15.999 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:15.999 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.999 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:15.999 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:16.259 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2121620 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2121620 ']' 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2121620 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.519 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121620 00:24:16.791 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:16.792 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:16.792 10:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121620' 00:24:16.792 killing process with pid 2121620 00:24:16.792 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2121620 00:24:16.792 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2121620 00:24:16.792 [2024-11-20 10:41:48.895468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334110 is same with the state(6) to be set 00:24:16.792 [... same tcp.c:1773 message repeated for tqpair=0x2334110 through 10:41:48.895842 ...] 00:24:16.792 [2024-11-20 10:41:48.896988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336ce0 is same with the state(6) to be set 00:24:16.793 [... same message repeated for tqpair=0x2336ce0 through 10:41:48.897320 ...] 00:24:16.793 [2024-11-20 10:41:48.900219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334600 is same with the state(6) to be set 00:24:16.793 [... same message repeated for tqpair=0x2334600 through 10:41:48.900297 ...] 00:24:16.793 [2024-11-20 10:41:48.901089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334ad0 is same with the state(6) to be set 00:24:16.793 [... repeated twice more for tqpair=0x2334ad0 ...] 00:24:16.793 [2024-11-20 10:41:48.902092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [... same message repeated for tqpair=0x2334fc0 ...]
with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.793 [2024-11-20 10:41:48.902205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902244] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the 
state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.902416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334fc0 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 
10:41:48.903756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.794 [2024-11-20 10:41:48.903808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same 
with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.903921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335960 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904902] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.904996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the 
state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.795 [2024-11-20 10:41:48.905156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335e30 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 
10:41:48.905867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same 
with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.905999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906072] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2336320 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.796 [2024-11-20 10:41:48.906619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the 
state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 
10:41:48.906839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.906872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23367f0 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.914803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.797 [2024-11-20 10:41:48.914841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.797 [2024-11-20 10:41:48.914852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.797 [2024-11-20 10:41:48.914860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.797 [2024-11-20 10:41:48.914869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.797 [2024-11-20 10:41:48.914877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.797 [2024-11-20 10:41:48.914885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.797 [2024-11-20 10:41:48.914892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.797 [2024-11-20 10:41:48.914900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dd310 is same with the state(6) to be set 00:24:16.797 [2024-11-20 10:41:48.914937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.797 [2024-11-20 10:41:48.914947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.797 [2024-11-20 10:41:48.914955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.797 [2024-11-20 10:41:48.914963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.797 
[2024-11-20 10:41:48.914971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:16.797 [2024-11-20 10:41:48.914979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) pair repeats for admin cid 0-3 ahead of each of the following recv-state errors ...]
00:24:16.797 [2024-11-20 10:41:48.915006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120ad00 is same with the state(6) to be set
00:24:16.797 [2024-11-20 10:41:48.915096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae610 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd93420 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cb0 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d9f0 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8f20 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94810 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c2180 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.915726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8bfa0 is same with the state(6) to be set
00:24:16.798 [2024-11-20 10:41:48.916311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.798 [2024-11-20 10:41:48.916333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid 43-63 (lba 30080-32640, len:128) ...]
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid 0-41 (lba 24576-29824, len:128) ...]
00:24:16.800 [2024-11-20 10:41:48.917454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.800 [2024-11-20 10:41:48.917595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.800 [2024-11-20 10:41:48.917609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid 1-63 (lba 24704-32640, len:128) ...]
00:24:16.802 [2024-11-20 10:41:48.925795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dd310 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120ad00 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcae610 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd93420 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd96cb0 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d9f0 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8f20 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd94810 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c2180 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.925948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8bfa0 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.928693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:16.802 [2024-11-20 10:41:48.929078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:16.802 [2024-11-20 10:41:48.929668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.802 [2024-11-20 10:41:48.929707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd94810 with addr=10.0.0.2, port=4420
00:24:16.802 [2024-11-20 10:41:48.929721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94810 is same with the state(6) to be set
00:24:16.802 [2024-11-20 10:41:48.929781] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... Unexpected PDU type 0x00 repeats five more times (10:41:48.930461-10:41:48.930619) ...]
00:24:16.802 [2024-11-20 10:41:48.930863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.802 [2024-11-20 10:41:48.930879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c2180 with addr=10.0.0.2, port=4420
00:24:16.802 [2024-11-20 10:41:48.930887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c2180 is same with the state(6) to be set
00:24:16.802 [2024-11-20 10:41:48.930899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd94810 (9): Bad file descriptor
00:24:16.802 [2024-11-20 10:41:48.930966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.802 [2024-11-20 10:41:48.930980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid 60-63 (lba 32256-32640), then READ / ABORTED - SQ DELETION (00/08) pairs for cid 4-22 (lba 25088-27392) ...]
00:24:16.803
[2024-11-20 10:41:48.931399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 
10:41:48.931569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931740] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.931983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.931992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.932000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.932009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.932017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.932026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.932033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.803 [2024-11-20 10:41:48.932042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.803 [2024-11-20 10:41:48.932051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.804 [2024-11-20 10:41:48.932061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.804 [2024-11-20 10:41:48.932068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.804 [2024-11-20 10:41:48.932076] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9aaa0 is same with the state(6) to be set 00:24:16.804 [2024-11-20 10:41:48.932176] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:16.804 [2024-11-20 10:41:48.932253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c2180 (9): Bad file descriptor 00:24:16.804 [2024-11-20 10:41:48.932265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:16.804 [2024-11-20 10:41:48.932273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:16.804 [2024-11-20 10:41:48.932282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:16.804 [2024-11-20 10:41:48.932290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:16.804 [2024-11-20 10:41:48.933574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:16.804 [2024-11-20 10:41:48.933603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:16.804 [2024-11-20 10:41:48.933613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:16.804 [2024-11-20 10:41:48.933622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:16.804 [2024-11-20 10:41:48.933630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:16.804 [2024-11-20 10:41:48.933876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.804 [2024-11-20 10:41:48.933890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8d9f0 with addr=10.0.0.2, port=4420 00:24:16.804 [2024-11-20 10:41:48.933898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d9f0 is same with the state(6) to be set 00:24:16.804 [2024-11-20 10:41:48.934202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d9f0 (9): Bad file descriptor 00:24:16.804 [2024-11-20 10:41:48.934250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:16.804 [2024-11-20 10:41:48.934257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:16.804 [2024-11-20 10:41:48.934265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:16.804 [2024-11-20 10:41:48.934272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:16.804 [2024-11-20 10:41:48.935904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.935918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.935930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.935938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.935947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.935958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.935968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.935975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.935985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.935992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.804 [2024-11-20 10:41:48.936406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.804 [2024-11-20 10:41:48.936416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.936991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.936998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.805 [2024-11-20 10:41:48.937007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12847d0 is same with the state(6) to be set
00:24:16.805 [2024-11-20 10:41:48.938292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.805 [2024-11-20 10:41:48.938306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.806 [2024-11-20 10:41:48.938982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.806 [2024-11-20 10:41:48.938991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.807 [2024-11-20 10:41:48.939000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.807 [2024-11-20 10:41:48.939010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.807 [2024-11-20 10:41:48.939017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.807 [2024-11-20 10:41:48.939027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.807 [2024-11-20 10:41:48.939034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.807 [2024-11-20 10:41:48.939044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.807 [2024-11-20 10:41:48.939051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.807 [2024-11-20 10:41:48.939061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.807 [2024-11-20 10:41:48.939068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.807 [2024-11-20 10:41:48.939077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.807 [2024-11-20 10:41:48.939085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.807 [2024-11-20 10:41:48.939094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 
10:41:48.939268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.939395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.939403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1196fa0 is same with the state(6) to be set 00:24:16.807 [2024-11-20 10:41:48.940675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.807 [2024-11-20 10:41:48.940945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.807 [2024-11-20 10:41:48.940954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.940961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.940971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.940978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.940988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.940995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.808 [2024-11-20 10:41:48.941619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.808 [2024-11-20 10:41:48.941626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.941781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.941789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199a00 is same with the state(6) to be set 00:24:16.809 [2024-11-20 10:41:48.943071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.809 [2024-11-20 10:41:48.943499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.809 [2024-11-20 10:41:48.943507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:16.810 [2024-11-20 10:41:48.943914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.943990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.943997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.944006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.944013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.944023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.944030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.944046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.944056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 10:41:48.944063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.810 [2024-11-20 10:41:48.944073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.810 [2024-11-20 
10:41:48.944080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 cid:58-63 nsid:1 lba:32000-32640 len:128 command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...]
00:24:16.811 [2024-11-20 10:41:48.944196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119afc0 is same with the state(6) to be set
00:24:16.811 [2024-11-20 10:41:48.945468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...]
00:24:16.812 [2024-11-20 10:41:48.946586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119c4f0 is same with the state(6) to be set
00:24:16.812 [2024-11-20 10:41:48.947864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ sqid:1 cid:5-63 (lba:17024-24448) and WRITE sqid:1 cid:0-4 (lba:24576-25088) command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...]
00:24:16.814 [2024-11-20 10:41:48.948981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119da20 is same with the state(6) to be set
00:24:16.814 [2024-11-20 10:41:48.950248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ sqid:1 cid:4-63 (lba:16896-24448) and WRITE sqid:1 cid:0-3 (lba:24576-24960) command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...]
00:24:16.816 [2024-11-20 10:41:48.951368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd7320 is same with the state(6) to be set
[nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.953820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.953830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.953840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.953917] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:24:16.816 [2024-11-20 10:41:48.953935] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:24:16.816 [2024-11-20 10:41:48.953952] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:24:16.816 [2024-11-20 10:41:48.954028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.954038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:16.816 task offset: 29952 on job bdev=Nvme3n1 fails
00:24:16.816
00:24:16.816 Latency(us)
00:24:16.816 [2024-11-20T09:41:49.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.816 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme1n1 ended in about 0.98 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme1n1  : 0.98  130.94  8.18   65.47  0.00  322220.66  18022.40  258648.75
00:24:16.816 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme2n1 ended in about 0.97 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme2n1  : 0.97  201.47  12.59  65.79  0.00  232078.94  17476.27  248162.99
00:24:16.816 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme3n1 ended in about 0.97 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme3n1  : 0.97  198.63  12.41  66.21  0.00  229415.89  10103.47  249910.61
00:24:16.816 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme4n1 ended in about 0.98 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme4n1  : 0.98  195.93  12.25  65.31  0.00  227994.77  12014.93  256901.12
00:24:16.816 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme5n1 ended in about 0.97 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme5n1  : 0.97  198.38  12.40  66.13  0.00  220209.92  12997.97  253405.87
00:24:16.816 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme6n1 ended in about 0.98 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme6n1  : 0.98  199.53  12.47  65.15  0.00  215753.33  17367.04  246415.36
00:24:16.816 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme7n1 ended in about 0.98 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme7n1  : 0.98  194.98  12.19  64.99  0.00  215038.40  12178.77  256901.12
00:24:16.816 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme8n1 ended in about 0.99 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme8n1  : 0.99  194.51  12.16  64.84  0.00  210932.05  36044.80  251658.24
00:24:16.816 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme9n1 ended in about 0.99 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme9n1  : 0.99  134.41  8.40   64.68  0.00  268743.06  19442.35  274377.39
00:24:16.816 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:16.816 Job: Nvme10n1 ended in about 0.99 seconds with error
00:24:16.816 Verification LBA range: start 0x0 length 0x400
00:24:16.816 Nvme10n1 : 0.99  133.08  8.32   64.52  0.00  264811.14  20534.61  251658.24
00:24:16.816 [2024-11-20T09:41:49.192Z] ===================================================================================================================
00:24:16.816 [2024-11-20T09:41:49.192Z] Total    : 1781.84 111.37 653.08  0.00  237177.12  10103.47  274377.39
00:24:16.816 [2024-11-20 10:41:48.981332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:16.816 [2024-11-20 10:41:48.981379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.981846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.816 [2024-11-20 10:41:48.981867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd96cb0 with addr=10.0.0.2, port=4420
00:24:16.816 [2024-11-20 10:41:48.981878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96cb0 is same with the state(6) to be set
00:24:16.816 [2024-11-20 10:41:48.982212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.816 [2024-11-20 10:41:48.982223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd93420 with addr=10.0.0.2, port=4420
00:24:16.816 [2024-11-20 10:41:48.982231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd93420 is same with the state(6) to be set
00:24:16.816 [2024-11-20 10:41:48.982594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.816 [2024-11-20 10:41:48.982604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcae610 with addr=10.0.0.2, port=4420
00:24:16.816 [2024-11-20 10:41:48.982612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcae610 is same with the state(6) to be set
00:24:16.816 [2024-11-20 10:41:48.982935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.816 [2024-11-20 10:41:48.982945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e8f20 with addr=10.0.0.2, port=4420
00:24:16.816 [2024-11-20 10:41:48.982952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8f20 is same with the state(6) to be set
00:24:16.816 [2024-11-20 10:41:48.984832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:16.816 [2024-11-20 10:41:48.984847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:16.816 [2024-11-20
10:41:48.985204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.816 [2024-11-20 10:41:48.985219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8bfa0 with addr=10.0.0.2, port=4420 00:24:16.816 [2024-11-20 10:41:48.985226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8bfa0 is same with the state(6) to be set 00:24:16.816 [2024-11-20 10:41:48.985563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.816 [2024-11-20 10:41:48.985573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dd310 with addr=10.0.0.2, port=4420 00:24:16.816 [2024-11-20 10:41:48.985580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dd310 is same with the state(6) to be set 00:24:16.816 [2024-11-20 10:41:48.985629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.816 [2024-11-20 10:41:48.985639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120ad00 with addr=10.0.0.2, port=4420 00:24:16.816 [2024-11-20 10:41:48.985646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120ad00 is same with the state(6) to be set 00:24:16.816 [2024-11-20 10:41:48.985658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd96cb0 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.985671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd93420 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.985681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcae610 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.985691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8f20 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.985726] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:24:16.816 [2024-11-20 10:41:48.985741] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:16.816 [2024-11-20 10:41:48.985756] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:24:16.816 [2024-11-20 10:41:48.985769] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:24:16.816 [2024-11-20 10:41:48.985781] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
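[Editor's note] The repeated "connect() failed, errno = 111" entries above are plain POSIX semantics: on Linux, errno 111 is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 anymore while the target winds down, so every qpair reconnect attempt is refused. A self-contained probe that reproduces the same failure with plain sockets (no SPDK code involved):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    /* With the target's listener torn down, this fails immediately and
     * sets errno to 111 (ECONNREFUSED) - the same number posix.c logs. */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));

    close(fd);
    return 0;
}
```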
00:24:16.816 [2024-11-20 10:41:48.985851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:16.816 [2024-11-20 10:41:48.986044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.816 [2024-11-20 10:41:48.986058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd94810 with addr=10.0.0.2, port=4420 00:24:16.816 [2024-11-20 10:41:48.986066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd94810 is same with the state(6) to be set 00:24:16.816 [2024-11-20 10:41:48.986270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.816 [2024-11-20 10:41:48.986283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c2180 with addr=10.0.0.2, port=4420 00:24:16.816 [2024-11-20 10:41:48.986291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c2180 is same with the state(6) to be set 00:24:16.816 [2024-11-20 10:41:48.986300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8bfa0 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.986310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dd310 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.986319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120ad00 (9): Bad file descriptor 00:24:16.816 [2024-11-20 10:41:48.986328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:16.816 [2024-11-20 10:41:48.986336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:16.816 [2024-11-20 10:41:48.986345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:16.816 [2024-11-20 10:41:48.986354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.986362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.986368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.986375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.986382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.986389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.986395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.986402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.986409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
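[Editor's note] When reading the long ABORTED - SQ DELETION run earlier, and the "(sct=0, sc=8)" write errors later in this log: the "(00/08)" pair is the NVMe Status Code Type and Status Code from completion queue entry dword 3, and p/m/dnr are its phase, more, and do-not-retry bits. In the generic type (SCT 0), SC 08h is "Command Aborted due to SQ Deletion", which is expected here because the shutdown test deletes the submission queues under live I/O. A minimal decoder written against the NVMe CQE layout from the spec, not against SPDK's internal structs:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the status fields of NVMe completion dword 3 (per the NVMe spec):
 * bit 16 = phase tag, bits 24:17 = SC, bits 27:25 = SCT,
 * bit 30 = more, bit 31 = do not retry. */
static void decode_cqe_dw3(uint32_t dw3)
{
    unsigned p   = (dw3 >> 16) & 0x1;
    unsigned sc  = (dw3 >> 17) & 0xff;
    unsigned sct = (dw3 >> 25) & 0x7;
    unsigned m   = (dw3 >> 30) & 0x1;
    unsigned dnr = (dw3 >> 31) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08)
        puts("generic status: Command Aborted due to SQ Deletion");
}

int main(void)
{
    decode_cqe_dw3(0x08u << 17);  /* SCT=0, SC=0x08, as printed above */
    return 0;
}
```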
00:24:16.817 [2024-11-20 10:41:48.986416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.986422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.986429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.986439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.986833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.817 [2024-11-20 10:41:48.986846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8d9f0 with addr=10.0.0.2, port=4420 00:24:16.817 [2024-11-20 10:41:48.986854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d9f0 is same with the state(6) to be set 00:24:16.817 [2024-11-20 10:41:48.986862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd94810 (9): Bad file descriptor 00:24:16.817 [2024-11-20 10:41:48.986871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c2180 (9): Bad file descriptor 00:24:16.817 [2024-11-20 10:41:48.986880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.986886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.986895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.986902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.986909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.986915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.986922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.986929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.986936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.986942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.986949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.986955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:24:16.817 [2024-11-20 10:41:48.986985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8d9f0 (9): Bad file descriptor 00:24:16.817 [2024-11-20 10:41:48.986995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.987001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.987008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.987014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.987022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.987029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.987036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.987042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:16.817 [2024-11-20 10:41:48.987069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:16.817 [2024-11-20 10:41:48.987076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:16.817 [2024-11-20 10:41:48.987086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:16.817 [2024-11-20 10:41:48.987093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
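[Editor's note] Every controller above fails through the same four-message cascade: the transport connect is refused, so the controller is reported in error state, reinitialization fails, the controller is marked failed, and the bdev layer finally reports the reset as failed. A compressed sketch of that sequence; the enum, state names, and functions here are illustrative only, not SPDK's actual reset path:

```c
#include <stdbool.h>
#include <stdio.h>

enum ctrlr_state { CTRLR_OK, CTRLR_ERROR, CTRLR_FAILED };

/* Reconnect poll: with the target listener gone, reinit cannot succeed. */
static bool reconnect_poll(const char *nqn, enum ctrlr_state *st)
{
    if (*st == CTRLR_ERROR) {
        printf("[%s] Ctrlr is in error state\n", nqn);
        printf("[%s] controller reinitialization failed\n", nqn);
        *st = CTRLR_FAILED;
        printf("[%s] in failed state.\n", nqn);
        return false;
    }
    return true;
}

static void reset_ctrlr(const char *nqn, enum ctrlr_state *st)
{
    /* The bdev layer only gives up once the lower layer has failed. */
    if (!reconnect_poll(nqn, st))
        printf("[%s] Resetting controller failed.\n", nqn);
}

int main(void)
{
    enum ctrlr_state st = CTRLR_ERROR;  /* connect already refused */
    reset_ctrlr("nqn.2016-06.io.spdk:cnode1", &st);
    return 0;
}
```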
00:24:16.817 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:17.772 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2121885 00:24:17.772 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:17.772 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2121885 00:24:17.772 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:17.772 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.772 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2121885 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.033 rmmod nvme_tcp 00:24:18.033 
rmmod nvme_fabrics 00:24:18.033 rmmod nvme_keyring 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2121620 ']' 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2121620 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2121620 ']' 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2121620 00:24:18.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2121620) - No such process 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2121620 is not found' 00:24:18.033 Process with pid 2121620 is not found 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.033 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.946 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.946 00:24:19.946 real 0m7.941s 00:24:19.946 user 0m19.935s 00:24:19.946 sys 0m1.291s 00:24:19.946 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.946 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:19.946 ************************************ 00:24:19.946 END TEST nvmf_shutdown_tc3 00:24:19.946 ************************************ 00:24:20.207 10:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:20.207 ************************************ 00:24:20.207 START TEST nvmf_shutdown_tc4 00:24:20.207 ************************************ 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.207 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:20.208 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:20.208 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.208 10:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:20.208 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:20.208 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.208 10:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.208 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:24:20.469 00:24:20.469 --- 10.0.0.2 ping statistics --- 00:24:20.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.469 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:24:20.469 00:24:20.469 --- 10.0.0.1 ping statistics --- 00:24:20.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.469 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2123152 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2123152 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2123152 ']' 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
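[Editor's note] The waitforlisten step above blocks until the freshly started nvmf_tgt answers on its RPC socket. A hypothetical stand-in showing the idea; only the socket path /var/tmp/spdk.sock and the max_retries=100 bound come from the log, everything else is an assumption:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    for (int retry = 0; retry < 100; retry++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        /* Once nvmf_tgt has bound its RPC socket, connect() succeeds. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            puts("nvmf_tgt is up and listening on the RPC socket");
            close(fd);
            return 0;
        }
        close(fd);
        sleep(1);  /* target still initializing; try again */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}
```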
00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.469 10:41:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:20.469 [2024-11-20 10:41:52.830040] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:24:20.469 [2024-11-20 10:41:52.830107] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.730 [2024-11-20 10:41:52.928616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.730 [2024-11-20 10:41:52.962645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.730 [2024-11-20 10:41:52.962676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.730 [2024-11-20 10:41:52.962682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.730 [2024-11-20 10:41:52.962687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.730 [2024-11-20 10:41:52.962691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.730 [2024-11-20 10:41:52.964266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.730 [2024-11-20 10:41:52.964571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.730 [2024-11-20 10:41:52.964692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.730 [2024-11-20 10:41:52.964692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:21.300 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.300 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:21.300 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.301 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.301 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:21.301 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.301 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.301 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.301 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:21.561 [2024-11-20 10:41:53.675337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:21.561 10:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.561 10:41:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 Malloc1 
00:24:21.562 [2024-11-20 10:41:53.790702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.562 Malloc2 00:24:21.562 Malloc3 00:24:21.562 Malloc4 00:24:21.562 Malloc5 00:24:21.822 Malloc6 00:24:21.822 Malloc7 00:24:21.822 Malloc8 00:24:21.822 Malloc9 00:24:21.822 Malloc10 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2123528 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:21.822 10:41:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:22.082 [2024-11-20 10:41:54.268815] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2123152 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2123152 ']' 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2123152 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123152 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123152' 00:24:27.433 killing process with pid 2123152 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2123152 00:24:27.433 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2123152 00:24:27.433 [2024-11-20 10:41:59.263899] 
00:24:27.433 [2024-11-20 10:41:59.263899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1138330 is same with the state(6) to be set
00:24:27.433 [2024-11-20 10:41:59.264029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1138800 is same with the state(6) to be set
00:24:27.433 Write completed with error (sct=0, sc=8)
00:24:27.433 starting I/O failed: -6
00:24:27.433 [2024-11-20 10:41:59.264548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1137e60 is same with the state(6) to be set
00:24:27.433 Write completed with error (sct=0, sc=8)
00:24:27.433 starting I/O failed: -6
00:24:27.433 [2024-11-20 10:41:59.265781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.434 Write completed with error (sct=0, sc=8)
00:24:27.434 starting I/O failed: -6
00:24:27.434 [2024-11-20 10:41:59.266971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.434 [2024-11-20 10:41:59.268174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1137480 is same with the state(6) to be set
00:24:27.434 [2024-11-20 10:41:59.268385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1137970 is same with the state(6) to be set
00:24:27.434 Write completed with error (sct=0, sc=8)
00:24:27.434 starting I/O failed: -6
00:24:27.435 [2024-11-20 10:41:59.269103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.435 NVMe io qpair process completion error
00:24:27.435 Write completed with error (sct=0, sc=8)
00:24:27.435 starting I/O failed: -6
00:24:27.435 [2024-11-20 10:41:59.270306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:27.435 Write completed with error (sct=0, sc=8)
00:24:27.435 starting I/O failed: -6
00:24:27.435 [2024-11-20 10:41:59.271166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.436 Write completed with error (sct=0, sc=8)
00:24:27.436 starting I/O failed: -6
00:24:27.436 [2024-11-20 10:41:59.272043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.436 Write completed with error (sct=0, sc=8)
00:24:27.436 starting I/O failed: -6
00:24:27.436 [2024-11-20 10:41:59.273583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.436 NVMe io qpair process completion error
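Each burst above has the same anatomy: the initiator's socket to the killed target returns -6 (-ENXIO, "No such device or address"), spdk_nvme_qpair_process_completions reports the CQ transport error for one qpair of one controller (cnode10, then cnode1), every write outstanding on that qpair completes with an abort status, and new submissions fail with "starting I/O failed: -6". When triaging a saved console log, a short shell pass summarizes the damage; console.log is an illustrative file name:

    # How many write completions came back failed?
    grep -c 'Write completed with error (sct=0, sc=8)' console.log
    # Which controller/qpair pairs reported the transport error, and how often?
    grep -o 'cnode[0-9]*, 1] CQ transport error -6 ([^)]*) on qpair id [0-9]*' console.log | sort | uniq -c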
00:24:27.436 Write completed with error (sct=0, sc=8)
00:24:27.436 starting I/O failed: -6
00:24:27.436 [2024-11-20 10:41:59.274748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.437 Write completed with error (sct=0, sc=8)
00:24:27.437 starting I/O failed: -6
00:24:27.437 [2024-11-20 10:41:59.275648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.437 Write completed with error (sct=0, sc=8)
00:24:27.437 starting I/O failed: -6
00:24:27.437 [2024-11-20 10:41:59.276553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:27.437 starting I/O failed: -6
00:24:27.438 Write completed with error (sct=0, sc=8)
00:24:27.438 [2024-11-20 10:41:59.280268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.438 NVMe io qpair process completion error
00:24:27.438 Write completed with error (sct=0, sc=8)
00:24:27.438 starting I/O failed: -6
00:24:27.438 [2024-11-20 10:41:59.281532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.438 Write completed with error (sct=0, sc=8)
00:24:27.438 starting I/O failed: -6
00:24:27.438 [2024-11-20 10:41:59.282471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:27.439 Write completed with error (sct=0, sc=8)
00:24:27.439 starting I/O failed: -6
00:24:27.439 [2024-11-20 10:41:59.283373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.439 Write completed with error (sct=0, sc=8)
00:24:27.439 starting I/O failed: -6
00:24:27.439 [2024-11-20 10:41:59.285285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.439 NVMe io qpair process completion error
00:24:27.439 Write completed with error (sct=0, sc=8)
00:24:27.440 Write completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 [2024-11-20 10:41:59.286619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with 
error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 [2024-11-20 10:41:59.287441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 
00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 [2024-11-20 10:41:59.288365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.440 starting I/O failed: -6 00:24:27.440 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 
00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 [2024-11-20 10:41:59.289994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) 
on qpair id 4 00:24:27.441 NVMe io qpair process completion error 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 [2024-11-20 10:41:59.291117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:27.441 starting I/O failed: -6 00:24:27.441 starting I/O failed: -6 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 
00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 starting I/O failed: -6 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.441 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 [2024-11-20 10:41:59.292056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed 
with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 [2024-11-20 10:41:59.292958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with 
error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.442 Write completed with error (sct=0, sc=8) 00:24:27.442 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error 
(sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 [2024-11-20 10:41:59.295299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:27.443 NVMe io qpair process completion error 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 [2024-11-20 10:41:59.296951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) 
on qpair id 2 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 
Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 starting I/O failed: -6 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.443 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 [2024-11-20 10:41:59.298489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.444 starting I/O failed: -6 00:24:27.444 starting I/O failed: -6 00:24:27.444 starting I/O failed: -6 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with 
error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error 
(sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 starting I/O failed: -6 00:24:27.444 [2024-11-20 10:41:59.300693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:27.444 NVMe io qpair process completion error 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.444 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 starting I/O failed: -6 00:24:27.445 Write completed with error (sct=0, sc=8) 00:24:27.445 Write 
completed with error (sct=0, sc=8) 00:24:27.445 Write completed with error (sct=0, sc=8) [this message, interleaved with "starting I/O failed: -6", repeats for every queued write to nqn.2016-06.io.spdk:cnode9; repetitions collapsed]
00:24:27.445 [2024-11-20 10:41:59.301734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.445 [2024-11-20 10:41:59.302569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:27.445 [2024-11-20 10:41:59.303511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.446 [2024-11-20 10:41:59.305728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.446 NVMe io qpair process completion error
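The "CQ transport error -6 (No such device or address)" lines above come from spdk_nvme_qpair_process_completions() noticing that the TCP connection behind a qpair vanished while the shutdown test tears the target down. A minimal sketch of how an initiator sees this, assuming only the public SPDK NVMe API (the function and the -ENXIO mapping match the log; the helper name is made up):

```c
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair once. spdk_nvme_qpair_process_completions() returns
 * the number of completions reaped, or a negative errno; -ENXIO (-6) is
 * what the log reports once the target side of the qpair is gone. */
static int32_t poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc == -ENXIO) {
		/* Transport-level failure: outstanding writes are failed back
		 * (the sc=8 completions above) and new submissions start
		 * failing too ("starting I/O failed: -6"). */
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
	}
	return rc;
}
```

In a real application this poll runs in the reactor loop; a negative return is the cue to reset or destroy the qpair.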
00:24:27.446 Write completed with error (sct=0, sc=8) [this message, interleaved with "starting I/O failed: -6", repeats for every queued write to nqn.2016-06.io.spdk:cnode7; repetitions collapsed]
00:24:27.447 [2024-11-20 10:41:59.306864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:27.447 [2024-11-20 10:41:59.307697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.447 [2024-11-20 10:41:59.308607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.448 [2024-11-20 10:41:59.310446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.448 NVMe io qpair process completion error
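For reference, the "(sct=0, sc=8)" pairs are the NVMe status fields of each failed write completion. A hedged sketch of a completion callback that decodes them, assuming the enum names from SPDK's nvme_spec.h (status code type 0 is the generic type, and status code 0x8 under it is "command aborted due to SQ deletion", which fits qpairs being destroyed mid-I/O):

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback of the kind registered via spdk_nvme_ns_cmd_write();
 * prints the same shape of message as the log when an I/O fails. */
static void write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			/* Aborted because the submission queue was deleted,
			 * not because the write failed on media. */
		}
	}
}
```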
00:24:27.448 Write completed with error (sct=0, sc=8) [this message, interleaved with "starting I/O failed: -6", repeats for every queued write to nqn.2016-06.io.spdk:cnode5; repetitions collapsed]
00:24:27.448 [2024-11-20 10:41:59.311708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:27.449 [2024-11-20 10:41:59.312531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:27.449 [2024-11-20 10:41:59.313457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:27.450 [2024-11-20 10:41:59.316312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:27.450 NVMe io qpair process completion error
00:24:27.450 Initializing NVMe Controllers
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:27.450 Controller IO queue size 128, less than required.
00:24:27.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:27.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:27.450 Initialization complete. Launching workers.
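The "Controller IO queue size 128, less than required" warnings mean the perf tool asked for a deeper I/O queue than the connected controller granted, so requests beyond 128 are software-queued inside the driver. A sketch of where that request is made on the initiator side, assuming the public SPDK connect API; the address and subsystem NQN mirror the log, the chosen sizes are illustrative, and the controller may still cap the effective depth at what the target advertises:

```c
#include <stdio.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *connect_with_deeper_queue(void)
{
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;

	/* Fill in the NVMe-oF TCP transport ID, matching the log's target. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.io_queue_size = 256;     /* request a deeper queue than the default */
	opts.io_queue_requests = 512; /* extra request objects for driver-side queuing */

	return spdk_nvme_connect(&trid, &opts, sizeof(opts));
}
```

The alternative, as the log itself suggests, is to lower the workload's queue depth or I/O size so fewer requests sit queued in the driver.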
00:24:27.450 ========================================================
00:24:27.450 Latency(us)
00:24:27.450 Device Information                                                       :     IOPS    MiB/s   Average      min        max
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1858.80    79.87  68877.68   765.33  121384.56
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1864.45    80.11  68699.84   833.95  156804.77
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1885.38    81.01  67959.80   717.70  134229.21
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1876.38    80.63  67584.79   549.07  119644.09
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1867.38    80.24  67933.41   851.57  119001.51
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1806.70    77.63  70242.48   917.90  123972.60
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1856.71    79.78  68397.38   915.49  124274.44
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1871.99    80.44  67867.60   660.64  126286.16
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1863.41    80.07  68201.40   694.49  126779.32
00:24:27.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1868.64    80.29  68048.42   618.95  128296.32
00:24:27.450 ========================================================
00:24:27.450 Total                                                                    : 18619.83   800.07  68374.02   549.07  156804.77
00:24:27.450
00:24:27.450 [2024-11-20 10:41:59.321056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1379a70 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1379410 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378bc0 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137a900 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137a720 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137aae0 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378890 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1379740 is same with the state(6) to be set
00:24:27.450 [2024-11-20 10:41:59.321338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378560 is same with the state(6) to be set
00:24:27.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:27.450 10:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:28.390 10:42:00
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2123528 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2123528 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2123528 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.390 rmmod nvme_tcp 00:24:28.390 rmmod nvme_fabrics 00:24:28.390 rmmod nvme_keyring 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2123152 ']' 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2123152 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2123152 ']' 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2123152 00:24:28.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2123152) - No such process 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2123152 is not found' 00:24:28.390 Process with pid 2123152 is not found 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.390 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.298 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.298 00:24:30.298 real 0m10.269s 00:24:30.298 user 0m27.922s 00:24:30.298 sys 0m4.075s 00:24:30.298 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.298 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:30.298 ************************************ 00:24:30.298 END TEST nvmf_shutdown_tc4 00:24:30.298 ************************************ 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:30.558 00:24:30.558 real 0m43.372s 00:24:30.558 user 1m45.105s 00:24:30.558 sys 0m13.990s 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:24:30.558 ************************************ 00:24:30.558 END TEST nvmf_shutdown 00:24:30.558 ************************************ 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:30.558 ************************************ 00:24:30.558 START TEST nvmf_nsid 00:24:30.558 ************************************ 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:30.558 * Looking for test storage... 00:24:30.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:30.558 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.819 10:42:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:30.819 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:30.819 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:30.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.820 --rc genhtml_branch_coverage=1 00:24:30.820 --rc genhtml_function_coverage=1 00:24:30.820 --rc genhtml_legend=1 00:24:30.820 --rc geninfo_all_blocks=1 00:24:30.820 --rc geninfo_unexecuted_blocks=1 00:24:30.820 00:24:30.820 ' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:30.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.820 --rc genhtml_branch_coverage=1 00:24:30.820 --rc genhtml_function_coverage=1 00:24:30.820 --rc genhtml_legend=1 00:24:30.820 --rc geninfo_all_blocks=1 00:24:30.820 --rc geninfo_unexecuted_blocks=1 00:24:30.820 00:24:30.820 ' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:30.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.820 --rc genhtml_branch_coverage=1 00:24:30.820 --rc genhtml_function_coverage=1 00:24:30.820 --rc genhtml_legend=1 00:24:30.820 --rc geninfo_all_blocks=1 00:24:30.820 --rc geninfo_unexecuted_blocks=1 00:24:30.820 00:24:30.820 ' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:30.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.820 --rc genhtml_branch_coverage=1 00:24:30.820 --rc genhtml_function_coverage=1 00:24:30.820 --rc genhtml_legend=1 00:24:30.820 --rc geninfo_all_blocks=1 00:24:30.820 --rc geninfo_unexecuted_blocks=1 00:24:30.820 00:24:30.820 ' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
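Note the non-fatal error captured above: line 33 of test/nvmf/common.sh evaluates [ '' -eq 1 ], and [ rejects the empty string as a non-integer, printing "integer expression expected" before the suite carries on. Expanding the variable with a default sidesteps it; a sketch, where the variable name is illustrative rather than the one common.sh actually tests:

    # "$SOME_FLAG" may be empty or unset; [ ... -eq ... ] needs integers on both sides.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi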
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.820 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.821 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.821 10:42:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.955 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.955 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
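The "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines come from matching each PCI function's vendor/device pair against the e810, x722 and mlx ID tables built just above. A sketch of the same scan reading sysfs directly (the real code consults a pre-built pci_bus_cache; the direct reads below are a simplifying assumption):

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == "$intel" && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"   # e.g. the two E810 ports above
        fi
    done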
00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.955 10:42:10 
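Each matched PCI function is then mapped to its kernel interface through the device's sysfs node, which is exactly what the "Found net devices under ..." lines report:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"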
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:24:38.955 00:24:38.955 --- 10.0.0.2 ping statistics --- 00:24:38.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.955 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:24:38.955 00:24:38.955 --- 10.0.0.1 ping statistics --- 00:24:38.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.955 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2129354 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2129354 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2129354 ']' 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.955 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:38.955 [2024-11-20 10:42:10.662178] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
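nvmf_tcp_init, traced above, turns the two E810 ports into a self-contained link: one port stays in the root namespace as the initiator, the other is moved into a private namespace to host the target, and one ping in each direction proves the path before any NVMe traffic flows. Collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'                           # tagged so cleanup can find it
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1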
00:24:38.955 [2024-11-20 10:42:10.662250] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.955 [2024-11-20 10:42:10.761820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.955 [2024-11-20 10:42:10.813613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.955 [2024-11-20 10:42:10.813671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.955 [2024-11-20 10:42:10.813680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.955 [2024-11-20 10:42:10.813687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.955 [2024-11-20 10:42:10.813693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.955 [2024-11-20 10:42:10.814483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2129793 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=db4db7ad-84b6-4b72-8a88-953e7eb485b4 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6a3de813-3abc-4147-9aed-fe3d14c25431 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e1bbea1c-e3a4-4204-9832-f57e3d63de3d 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.216 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:39.216 null0 00:24:39.216 null1 00:24:39.216 [2024-11-20 10:42:11.576182] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:24:39.216 [2024-11-20 10:42:11.576250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129793 ] 00:24:39.216 null2 00:24:39.216 [2024-11-20 10:42:11.581631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.476 [2024-11-20 10:42:11.605933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2129793 /var/tmp/tgt2.sock 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2129793 ']' 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:39.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
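Both targets print "Waiting for process to start up and listen on UNIX domain socket ..." while the harness polls their RPC sockets; the first nvmf_tgt answers on /var/tmp/spdk.sock, the second spdk_tgt on /var/tmp/tgt2.sock. A rough sketch of that wait loop (the real helper is waitforlisten in autotest_common.sh; this is an approximation of it, not its code):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < 100 )); do
            kill -0 "$pid" 2>/dev/null || return 1                     # app died early
            "$rootdir"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0                                # RPC server is up
            sleep 0.1
        done
        return 1
    }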
00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.477 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:39.477 [2024-11-20 10:42:11.666684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.477 [2024-11-20 10:42:11.719583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.737 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.738 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:39.738 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:39.997 [2024-11-20 10:42:12.270418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.997 [2024-11-20 10:42:12.286611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:39.997 nvme0n1 nvme0n2 00:24:39.997 nvme1n1 00:24:39.997 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:39.997 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:39.998 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:41.909 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:42.481 10:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid db4db7ad-84b6-4b72-8a88-953e7eb485b4 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:42.481 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=db4db7ad84b64b728a88953e7eb485b4 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DB4DB7AD84B64B728A88953E7EB485B4 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DB4DB7AD84B64B728A88953E7EB485B4 == \D\B\4\D\B\7\A\D\8\4\B\6\4\B\7\2\8\A\8\8\9\5\3\E\7\E\B\4\8\5\B\4 ]] 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6a3de813-3abc-4147-9aed-fe3d14c25431 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6a3de8133abc41479aedfe3d14c25431 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6A3DE8133ABC41479AEDFE3D14C25431 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6A3DE8133ABC41479AEDFE3D14C25431 == \6\A\3\D\E\8\1\3\3\A\B\C\4\1\4\7\9\A\E\D\F\E\3\D\1\4\C\2\5\4\3\1 ]] 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:42.741 10:42:14 
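The [[ ... == \D\B\4... ]] comparisons above are the point of the whole test: each namespace's NGUID, as reported by the kernel after nvme connect, must equal the UUID the test assigned at namespace creation with the dashes stripped. Distilled from the trace:

    uuid=db4db7ad-84b6-4b72-8a88-953e7eb485b4            # assigned via uuidgen earlier
    expected=$(tr -d - <<< "$uuid")
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "${expected^^}" ]] && echo "NSID 1: NGUID matches its UUID"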
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:42.741 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e1bbea1c-e3a4-4204-9832-f57e3d63de3d 00:24:42.742 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:42.742 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:42.742 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:42.742 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:42.742 10:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:42.742 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e1bbea1ce3a442049832f57e3d63de3d 00:24:42.742 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E1BBEA1CE3A442049832F57E3D63DE3D 00:24:42.742 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E1BBEA1CE3A442049832F57E3D63DE3D == \E\1\B\B\E\A\1\C\E\3\A\4\4\2\0\4\9\8\3\2\F\5\7\E\3\D\6\3\D\E\3\D ]] 00:24:42.742 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2129793 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2129793 ']' 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2129793 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2129793 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2129793' 00:24:43.001 killing process with pid 2129793 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2129793 00:24:43.001 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2129793 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.261 rmmod nvme_tcp 00:24:43.261 rmmod nvme_fabrics 00:24:43.261 rmmod nvme_keyring 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2129354 ']' 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2129354 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2129354 ']' 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2129354 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2129354 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2129354' 00:24:43.261 killing process with pid 2129354 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2129354 00:24:43.261 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2129354 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.521 10:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.433 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.433 00:24:45.433 real 0m14.999s 00:24:45.433 user 
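Cleanup then runs in reverse, as traced: disconnect the host, unload the kernel initiator modules (the rmmod lines are modprobe's verbose output), kill both targets, strip only the SPDK-tagged iptables rules, and tear down the namespace. Collected below, with the namespace removal shown as an assumption about what _remove_spdk_ns does:

    nvme disconnect -d /dev/nvme0
    modprobe -v -r nvme-tcp                               # also pulls nvme-fabrics, nvme-keyring
    modprobe -v -r nvme-fabrics
    kill "$tgt2pid" "$nvmfpid"                            # killprocess waits and reports
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK's tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # presumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1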
0m11.424s 00:24:45.433 sys 0m6.938s 00:24:45.433 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.433 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:45.433 ************************************ 00:24:45.433 END TEST nvmf_nsid 00:24:45.433 ************************************ 00:24:45.693 10:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:45.693 00:24:45.693 real 13m3.887s 00:24:45.693 user 27m13.237s 00:24:45.693 sys 3m58.410s 00:24:45.693 10:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.693 10:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:45.693 ************************************ 00:24:45.693 END TEST nvmf_target_extra 00:24:45.693 ************************************ 00:24:45.693 10:42:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:45.693 10:42:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.693 10:42:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.693 10:42:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.693 ************************************ 00:24:45.693 START TEST nvmf_host 00:24:45.693 ************************************ 00:24:45.693 10:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:45.693 * Looking for test storage... 00:24:45.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:45.693 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.693 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.693 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
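The real/user/sys triplet above is the run_test wrapper timing the whole nsid suite between its START and END banners; every suite in this log is driven the same way. A sketch of the wrapper's shape (the original lives in autotest_common.sh and also validates its arguments, which is what the recurring "'[' 3 -le 1 ']'" traces are):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                 # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }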
ver1_l : ver2_l) )) 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.954 --rc genhtml_branch_coverage=1 00:24:45.954 --rc genhtml_function_coverage=1 00:24:45.954 --rc genhtml_legend=1 00:24:45.954 --rc geninfo_all_blocks=1 00:24:45.954 --rc geninfo_unexecuted_blocks=1 00:24:45.954 00:24:45.954 ' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.954 --rc genhtml_branch_coverage=1 00:24:45.954 --rc genhtml_function_coverage=1 00:24:45.954 --rc genhtml_legend=1 00:24:45.954 --rc geninfo_all_blocks=1 00:24:45.954 --rc geninfo_unexecuted_blocks=1 00:24:45.954 00:24:45.954 ' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.954 --rc genhtml_branch_coverage=1 00:24:45.954 --rc genhtml_function_coverage=1 00:24:45.954 --rc genhtml_legend=1 00:24:45.954 --rc geninfo_all_blocks=1 00:24:45.954 --rc geninfo_unexecuted_blocks=1 00:24:45.954 00:24:45.954 ' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.954 --rc genhtml_branch_coverage=1 00:24:45.954 --rc genhtml_function_coverage=1 00:24:45.954 --rc genhtml_legend=1 00:24:45.954 --rc geninfo_all_blocks=1 00:24:45.954 --rc geninfo_unexecuted_blocks=1 00:24:45.954 00:24:45.954 ' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.954 10:42:18 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.955 ************************************ 00:24:45.955 START TEST nvmf_multicontroller 00:24:45.955 ************************************ 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:45.955 * Looking for test storage... 
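nvmf_host.sh itself is just a dispatcher: it captures its arguments once and forwards them to every host-side suite, which is why multicontroller.sh receives the same --transport=tcp that the run started with. In sketch form, with $rootdir standing in for the checkout path shown in the trace:

    TEST_ARGS=("$@")                              # here: --transport=tcp
    run_test "nvmf_multicontroller" \
        "$rootdir/test/nvmf/host/multicontroller.sh" "${TEST_ARGS[@]}"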
00:24:45.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.955 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.216 --rc genhtml_branch_coverage=1 00:24:46.216 --rc genhtml_function_coverage=1 00:24:46.216 --rc genhtml_legend=1 00:24:46.216 --rc geninfo_all_blocks=1 00:24:46.216 --rc geninfo_unexecuted_blocks=1 00:24:46.216 00:24:46.216 ' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.216 --rc genhtml_branch_coverage=1 00:24:46.216 --rc genhtml_function_coverage=1 00:24:46.216 --rc genhtml_legend=1 00:24:46.216 --rc geninfo_all_blocks=1 00:24:46.216 --rc geninfo_unexecuted_blocks=1 00:24:46.216 00:24:46.216 ' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.216 --rc genhtml_branch_coverage=1 00:24:46.216 --rc genhtml_function_coverage=1 00:24:46.216 --rc genhtml_legend=1 00:24:46.216 --rc geninfo_all_blocks=1 00:24:46.216 --rc geninfo_unexecuted_blocks=1 00:24:46.216 00:24:46.216 ' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:46.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.216 --rc genhtml_branch_coverage=1 00:24:46.216 --rc genhtml_function_coverage=1 00:24:46.216 --rc genhtml_legend=1 00:24:46.216 --rc geninfo_all_blocks=1 00:24:46.216 --rc geninfo_unexecuted_blocks=1 00:24:46.216 00:24:46.216 ' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:46.216 10:42:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:46.216 10:42:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.216 10:42:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.352 
10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:54.352 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:54.352 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.352 10:42:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:54.352 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:54.352 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.352 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
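The scan above found both ports of an E810 NIC (0x8086:0x159b) with net devices cvl_0_0 and cvl_0_1; nvmf_tcp_init, traced next, turns one machine into both ends of the connection by isolating the target port in a network namespace (on this phy rig the two ports are presumably cabled to each other, which is why the cross-port pings succeed). Condensed to plain commands, the setup that follows amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # initiator -> target sanity check

nvmf_tgt is then launched inside cvl_0_0_ns_spdk (note the "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt command further down), so the target listens on 10.0.0.2 while the host side connects from 10.0.0.1.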
00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:24:54.353 00:24:54.353 --- 10.0.0.2 ping statistics --- 00:24:54.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.353 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:54.353 00:24:54.353 --- 10.0.0.1 ping statistics --- 00:24:54.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.353 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2134904 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2134904 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2134904 ']' 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.353 10:42:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.353 [2024-11-20 10:42:25.991151] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:24:54.353 [2024-11-20 10:42:25.991226] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.353 [2024-11-20 10:42:26.089488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:54.353 [2024-11-20 10:42:26.141173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.353 [2024-11-20 10:42:26.141224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.353 [2024-11-20 10:42:26.141233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.353 [2024-11-20 10:42:26.141240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.353 [2024-11-20 10:42:26.141246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.353 [2024-11-20 10:42:26.143063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.353 [2024-11-20 10:42:26.143226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.353 [2024-11-20 10:42:26.143257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 [2024-11-20 10:42:26.866674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 Malloc0 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 [2024-11-20 10:42:26.937611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 [2024-11-20 10:42:26.949496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 Malloc1 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.614 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.875 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.875 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:54.875 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.875 10:42:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2134961 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:54.875 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2134961 /var/tmp/bdevperf.sock 00:24:54.876 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2134961 ']' 00:24:54.876 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.876 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.876 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
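With cnode1 and cnode2 each listening on ports 4420 and 4421, and bdevperf now waiting on /var/tmp/bdevperf.sock, the traces below attach a controller named NVMe0 once and then deliberately re-attach under the same name four ways: same path with a mismatched hostnqn, a different subsystem (cnode2), multipath disabled, and multipath failover on the same path. All four are expected to fail with JSON-RPC error -114, which the NOT wrapper asserts; a plain attach to the second listener on 4421 then succeeds. rpc_cmd in the trace is a thin wrapper over SPDK's rpc.py, so the first rejected call is equivalent to running:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
        -q nqn.2021-09-7.io.spdk:00001
    # expected: code -114, "A controller named NVMe0 already exists
    # with the specified network path"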
00:24:54.876 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.876 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.819 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.819 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:55.819 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:55.819 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.819 10:42:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.819 NVMe0n1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.819 1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.819 request: 00:24:55.819 { 00:24:55.819 "name": "NVMe0", 00:24:55.819 "trtype": "tcp", 00:24:55.819 "traddr": "10.0.0.2", 00:24:55.819 "adrfam": "ipv4", 00:24:55.819 "trsvcid": "4420", 00:24:55.819 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:55.819 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:55.819 "hostaddr": "10.0.0.1", 00:24:55.819 "prchk_reftag": false, 00:24:55.819 "prchk_guard": false, 00:24:55.819 "hdgst": false, 00:24:55.819 "ddgst": false, 00:24:55.819 "allow_unrecognized_csi": false, 00:24:55.819 "method": "bdev_nvme_attach_controller", 00:24:55.819 "req_id": 1 00:24:55.819 } 00:24:55.819 Got JSON-RPC error response 00:24:55.819 response: 00:24:55.819 { 00:24:55.819 "code": -114, 00:24:55.819 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:55.819 } 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.819 request: 00:24:55.819 { 00:24:55.819 "name": "NVMe0", 00:24:55.819 "trtype": "tcp", 00:24:55.819 "traddr": "10.0.0.2", 00:24:55.819 "adrfam": "ipv4", 00:24:55.819 "trsvcid": "4420", 00:24:55.819 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:55.819 "hostaddr": "10.0.0.1", 00:24:55.819 "prchk_reftag": false, 00:24:55.819 "prchk_guard": false, 00:24:55.819 "hdgst": false, 00:24:55.819 "ddgst": false, 00:24:55.819 "allow_unrecognized_csi": false, 00:24:55.819 "method": "bdev_nvme_attach_controller", 00:24:55.819 "req_id": 1 00:24:55.819 } 00:24:55.819 Got JSON-RPC error response 00:24:55.819 response: 00:24:55.819 { 00:24:55.819 "code": -114, 00:24:55.819 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:55.819 } 00:24:55.819 10:42:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.819 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.081 request: 00:24:56.081 { 00:24:56.081 "name": "NVMe0", 00:24:56.081 "trtype": "tcp", 00:24:56.081 "traddr": "10.0.0.2", 00:24:56.081 "adrfam": "ipv4", 00:24:56.081 "trsvcid": "4420", 00:24:56.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.081 "hostaddr": "10.0.0.1", 00:24:56.081 "prchk_reftag": false, 00:24:56.081 "prchk_guard": false, 00:24:56.081 "hdgst": false, 00:24:56.081 "ddgst": false, 00:24:56.081 "multipath": "disable", 00:24:56.081 "allow_unrecognized_csi": false, 00:24:56.081 "method": "bdev_nvme_attach_controller", 00:24:56.081 "req_id": 1 00:24:56.081 } 00:24:56.081 Got JSON-RPC error response 00:24:56.081 response: 00:24:56.081 { 00:24:56.081 "code": -114, 00:24:56.081 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:56.081 } 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.081 10:42:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.081 request: 00:24:56.081 { 00:24:56.081 "name": "NVMe0", 00:24:56.081 "trtype": "tcp", 00:24:56.081 "traddr": "10.0.0.2", 00:24:56.081 "adrfam": "ipv4", 00:24:56.081 "trsvcid": "4420", 00:24:56.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.081 "hostaddr": "10.0.0.1", 00:24:56.081 "prchk_reftag": false, 00:24:56.081 "prchk_guard": false, 00:24:56.081 "hdgst": false, 00:24:56.081 "ddgst": false, 00:24:56.081 "multipath": "failover", 00:24:56.081 "allow_unrecognized_csi": false, 00:24:56.081 "method": "bdev_nvme_attach_controller", 00:24:56.081 "req_id": 1 00:24:56.081 } 00:24:56.081 Got JSON-RPC error response 00:24:56.081 response: 00:24:56.081 { 00:24:56.081 "code": -114, 00:24:56.081 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:56.081 } 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.081 NVMe0n1 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
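After a brief path shuffle below (drop the 4421 path, attach a separate NVMe1, confirm bdev_nvme_get_controllers reports exactly 2), bdevperf.py perform_tests prints the JSON summary for the single write job: queue depth 128, 4 KiB I/O, roughly 1 s runtime. The figures are internally consistent — MiB/s is just IOPS times block size, and the ~4.75 ms average latency is about what Little's law predicts at this queue depth (python3 used here purely as a calculator):

    python3 -c 'print(26892.616488595773 * 4096 / 2**20)'   # -> 105.049... MiB/s, as reported
    python3 -c 'print(128 / 26892.616488595773 * 1e6)'      # -> ~4759 us; bdevperf reports 4748.67

(the second line is a Little's-law approximation, not how bdevperf computes its latency figure).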
00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.081 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:56.081 10:42:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:57.462 { 00:24:57.462 "results": [ 00:24:57.462 { 00:24:57.462 "job": "NVMe0n1", 00:24:57.462 "core_mask": "0x1", 00:24:57.462 "workload": "write", 00:24:57.462 "status": "finished", 00:24:57.462 "queue_depth": 128, 00:24:57.462 "io_size": 4096, 00:24:57.462 "runtime": 1.005592, 00:24:57.462 "iops": 26892.616488595773, 00:24:57.462 "mibps": 105.04928315857724, 00:24:57.462 "io_failed": 0, 00:24:57.462 "io_timeout": 0, 00:24:57.462 "avg_latency_us": 4748.666492129818, 00:24:57.462 "min_latency_us": 2075.306666666667, 00:24:57.462 "max_latency_us": 11468.8 00:24:57.462 } 00:24:57.462 ], 00:24:57.462 "core_count": 1 00:24:57.462 } 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2134961 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2134961 ']' 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2134961 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2134961 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2134961' 00:24:57.462 killing process with pid 2134961 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2134961 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2134961 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:57.462 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:57.462 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:57.462 [2024-11-20 10:42:27.081543] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:24:57.463 [2024-11-20 10:42:27.081627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134961 ] 00:24:57.463 [2024-11-20 10:42:27.176314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.463 [2024-11-20 10:42:27.230341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.463 [2024-11-20 10:42:28.413433] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 0b8b2e42-0efa-4a6f-b55b-76bb60875e51 already exists 00:24:57.463 [2024-11-20 10:42:28.413478] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:0b8b2e42-0efa-4a6f-b55b-76bb60875e51 alias for bdev NVMe1n1 00:24:57.463 [2024-11-20 10:42:28.413488] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:57.463 Running I/O for 1 seconds... 00:24:57.463 26850.00 IOPS, 104.88 MiB/s 00:24:57.463 Latency(us) 00:24:57.463 [2024-11-20T09:42:29.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.463 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:57.463 NVMe0n1 : 1.01 26892.62 105.05 0.00 0.00 4748.67 2075.31 11468.80 00:24:57.463 [2024-11-20T09:42:29.839Z] =================================================================================================================== 00:24:57.463 [2024-11-20T09:42:29.839Z] Total : 26892.62 105.05 0.00 0.00 4748.67 2075.31 11468.80 00:24:57.463 Received shutdown signal, test time was about 1.000000 seconds 00:24:57.463 00:24:57.463 Latency(us) 00:24:57.463 [2024-11-20T09:42:29.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.463 [2024-11-20T09:42:29.839Z] =================================================================================================================== 00:24:57.463 [2024-11-20T09:42:29.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.463 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.463 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.463 rmmod nvme_tcp 00:24:57.463 rmmod nvme_fabrics 00:24:57.723 rmmod nvme_keyring 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
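The throughput figures bdevperf reported above are internally consistent: the "mibps" field and the MiB/s column in the summary table both follow from the raw IOPS and I/O size. A minimal sketch of that conversion, using the exact values from the perform_tests JSON (this one-liner is illustrative only, not part of the test suite):

awk 'BEGIN {
    iops = 26892.616488595773   # "iops" from the perform_tests JSON above
    io_size = 4096              # "io_size" in bytes
    # MiB/s = IOPS * bytes per I/O / 2^20
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# Prints 105.05 MiB/s, matching both the "mibps" field and the summary table.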
00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2134904 ']' 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2134904 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2134904 ']' 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2134904 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2134904 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2134904' 00:24:57.723 killing process with pid 2134904 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2134904 00:24:57.723 10:42:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2134904 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.723 10:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.263 00:25:00.263 real 0m13.943s 00:25:00.263 user 0m17.073s 00:25:00.263 sys 0m6.448s 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:00.263 ************************************ 00:25:00.263 END TEST nvmf_multicontroller 00:25:00.263 ************************************ 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.263 ************************************ 00:25:00.263 START TEST nvmf_aer 00:25:00.263 ************************************ 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:00.263 * Looking for test storage... 00:25:00.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.263 --rc genhtml_branch_coverage=1 00:25:00.263 --rc genhtml_function_coverage=1 00:25:00.263 --rc genhtml_legend=1 00:25:00.263 --rc geninfo_all_blocks=1 00:25:00.263 --rc geninfo_unexecuted_blocks=1 00:25:00.263 00:25:00.263 ' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.263 --rc genhtml_branch_coverage=1 00:25:00.263 --rc genhtml_function_coverage=1 00:25:00.263 --rc genhtml_legend=1 00:25:00.263 --rc geninfo_all_blocks=1 00:25:00.263 --rc geninfo_unexecuted_blocks=1 00:25:00.263 00:25:00.263 ' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.263 --rc genhtml_branch_coverage=1 00:25:00.263 --rc genhtml_function_coverage=1 00:25:00.263 --rc genhtml_legend=1 00:25:00.263 --rc geninfo_all_blocks=1 00:25:00.263 --rc geninfo_unexecuted_blocks=1 00:25:00.263 00:25:00.263 ' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:00.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.263 --rc genhtml_branch_coverage=1 00:25:00.263 --rc genhtml_function_coverage=1 00:25:00.263 --rc genhtml_legend=1 00:25:00.263 --rc geninfo_all_blocks=1 00:25:00.263 --rc geninfo_unexecuted_blocks=1 00:25:00.263 00:25:00.263 ' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.263 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.264 10:42:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:08.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:08.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:08.400 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.400 10:42:39 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:08.400 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.400 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.401 
10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:25:08.401 00:25:08.401 --- 10.0.0.2 ping statistics --- 00:25:08.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.401 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:25:08.401 00:25:08.401 --- 10.0.0.1 ping statistics --- 00:25:08.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.401 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2139735 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2139735 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2139735 ']' 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.401 10:42:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 [2024-11-20 10:42:39.797073] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
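The target starting here was launched by nvmfappstart, which runs nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on its RPC socket. A minimal sketch of that pattern, assuming $SPDK_BIN points at the build output directory; the real helpers in nvmf/common.sh and autotest_common.sh add timeouts and more careful pid handling:

# Sketch only: launch the target inside the namespace created earlier.
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the app is ready to serve rpc.py calls.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.5
done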
00:25:08.401 [2024-11-20 10:42:39.797124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.401 [2024-11-20 10:42:39.891672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.401 [2024-11-20 10:42:39.928194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.401 [2024-11-20 10:42:39.928228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.401 [2024-11-20 10:42:39.928236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.401 [2024-11-20 10:42:39.928243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.401 [2024-11-20 10:42:39.928249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.401 [2024-11-20 10:42:39.929936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.401 [2024-11-20 10:42:39.930087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.401 [2024-11-20 10:42:39.930216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.401 [2024-11-20 10:42:39.930387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 [2024-11-20 10:42:40.640045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 Malloc0 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 [2024-11-20 10:42:40.711477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.401 [ 00:25:08.401 { 00:25:08.401 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:08.401 "subtype": "Discovery", 00:25:08.401 "listen_addresses": [], 00:25:08.401 "allow_any_host": true, 00:25:08.401 "hosts": [] 00:25:08.401 }, 00:25:08.401 { 00:25:08.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.401 "subtype": "NVMe", 00:25:08.401 "listen_addresses": [ 00:25:08.401 { 00:25:08.401 "trtype": "TCP", 00:25:08.401 "adrfam": "IPv4", 00:25:08.401 "traddr": "10.0.0.2", 00:25:08.401 "trsvcid": "4420" 00:25:08.401 } 00:25:08.401 ], 00:25:08.401 "allow_any_host": true, 00:25:08.401 "hosts": [], 00:25:08.401 "serial_number": "SPDK00000000000001", 00:25:08.401 "model_number": "SPDK bdev Controller", 00:25:08.401 "max_namespaces": 2, 00:25:08.401 "min_cntlid": 1, 00:25:08.401 "max_cntlid": 65519, 00:25:08.401 "namespaces": [ 00:25:08.401 { 00:25:08.401 "nsid": 1, 00:25:08.401 "bdev_name": "Malloc0", 00:25:08.401 "name": "Malloc0", 00:25:08.401 "nguid": "438579B9253545DF8F95B1C3F9155538", 00:25:08.401 "uuid": "438579b9-2535-45df-8f95-b1c3f9155538" 00:25:08.401 } 00:25:08.401 ] 00:25:08.401 } 00:25:08.401 ] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:08.401 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2139974 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:08.402 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.662 Malloc1 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.662 10:42:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.662 Asynchronous Event Request test 00:25:08.662 Attaching to 10.0.0.2 00:25:08.662 Attached to 10.0.0.2 00:25:08.662 Registering asynchronous event callbacks... 00:25:08.662 Starting namespace attribute notice tests for all controllers... 00:25:08.662 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:08.662 aer_cb - Changed Namespace 00:25:08.662 Cleaning up... 
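The aer tool above arms the asynchronous-event callback and then creates /tmp/aer_touch_file; the shell side spins in waitforfile until that file exists before hot-adding Malloc1 as namespace 2, which is what produces the "Changed Namespace" notice. A minimal sketch of the polling idiom visible in the trace (the counter, the 200-iteration cap, and the 0.1 s sleep are taken straight from the xtrace lines above):

# Sketch of the waitforfile idiom traced above: poll for the touch file
# the aer tool creates, giving up after 200 * 0.1 s = 20 seconds.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        [ "$i" -lt 200 ] || return 1
        i=$((i + 1))
        sleep 0.1
    done
}
waitforfile /tmp/aer_touch_file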
00:25:08.662 [ 00:25:08.662 { 00:25:08.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:08.662 "subtype": "Discovery", 00:25:08.662 "listen_addresses": [], 00:25:08.662 "allow_any_host": true, 00:25:08.662 "hosts": [] 00:25:08.662 }, 00:25:08.662 { 00:25:08.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.662 "subtype": "NVMe", 00:25:08.662 "listen_addresses": [ 00:25:08.662 { 00:25:08.662 "trtype": "TCP", 00:25:08.662 "adrfam": "IPv4", 00:25:08.662 "traddr": "10.0.0.2", 00:25:08.662 "trsvcid": "4420" 00:25:08.662 } 00:25:08.662 ], 00:25:08.662 "allow_any_host": true, 00:25:08.662 "hosts": [], 00:25:08.662 "serial_number": "SPDK00000000000001", 00:25:08.662 "model_number": "SPDK bdev Controller", 00:25:08.662 "max_namespaces": 2, 00:25:08.662 "min_cntlid": 1, 00:25:08.662 "max_cntlid": 65519, 00:25:08.662 "namespaces": [ 00:25:08.662 { 00:25:08.662 "nsid": 1, 00:25:08.662 "bdev_name": "Malloc0", 00:25:08.662 "name": "Malloc0", 00:25:08.662 "nguid": "438579B9253545DF8F95B1C3F9155538", 00:25:08.662 "uuid": "438579b9-2535-45df-8f95-b1c3f9155538" 00:25:08.662 }, 00:25:08.662 { 00:25:08.662 "nsid": 2, 00:25:08.662 "bdev_name": "Malloc1", 00:25:08.662 "name": "Malloc1", 00:25:08.662 "nguid": "A3C47D016D1949A18B2BD8B97AAB8053", 00:25:08.662 "uuid": "a3c47d01-6d19-49a1-8b2b-d8b97aab8053" 00:25:08.662 } 00:25:08.662 ] 00:25:08.662 } 00:25:08.662 ] 00:25:08.662 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.662 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2139974 00:25:08.662 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:08.662 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.662 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.662 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.922 rmmod 
nvme_tcp 00:25:08.922 rmmod nvme_fabrics 00:25:08.922 rmmod nvme_keyring 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2139735 ']' 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2139735 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2139735 ']' 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2139735 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139735 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139735' 00:25:08.922 killing process with pid 2139735 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2139735 00:25:08.922 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2139735 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.182 10:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.094 00:25:11.094 real 0m11.176s 00:25:11.094 user 0m7.822s 00:25:11.094 sys 0m5.878s 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:11.094 ************************************ 00:25:11.094 END TEST nvmf_aer 00:25:11.094 ************************************ 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.094 10:42:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.355 ************************************ 00:25:11.355 START TEST nvmf_async_init 00:25:11.355 ************************************ 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:11.355 * Looking for test storage... 00:25:11.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:11.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.355 --rc genhtml_branch_coverage=1 00:25:11.355 --rc genhtml_function_coverage=1 00:25:11.355 --rc genhtml_legend=1 00:25:11.355 --rc geninfo_all_blocks=1 00:25:11.355 --rc geninfo_unexecuted_blocks=1 00:25:11.355 00:25:11.355 ' 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.355 --rc genhtml_branch_coverage=1 00:25:11.355 --rc genhtml_function_coverage=1 00:25:11.355 --rc genhtml_legend=1 00:25:11.355 --rc geninfo_all_blocks=1 00:25:11.355 --rc geninfo_unexecuted_blocks=1 00:25:11.355 00:25:11.355 ' 00:25:11.355 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.356 --rc genhtml_branch_coverage=1 00:25:11.356 --rc genhtml_function_coverage=1 00:25:11.356 --rc genhtml_legend=1 00:25:11.356 --rc geninfo_all_blocks=1 00:25:11.356 --rc geninfo_unexecuted_blocks=1 00:25:11.356 00:25:11.356 ' 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.356 --rc genhtml_branch_coverage=1 00:25:11.356 --rc genhtml_function_coverage=1 00:25:11.356 --rc genhtml_legend=1 00:25:11.356 --rc geninfo_all_blocks=1 00:25:11.356 --rc geninfo_unexecuted_blocks=1 00:25:11.356 00:25:11.356 ' 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.356 10:42:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.356 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:11.618 10:42:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b2fc17bc4b004ac3b807fc58609627b9 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.618 10:42:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:19.759 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:19.759 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:19.759 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:19.759 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.759 10:42:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.759 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.760 10:42:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:25:19.760 00:25:19.760 --- 10.0.0.2 ping statistics --- 00:25:19.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.760 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:25:19.760 00:25:19.760 --- 10.0.0.1 ping statistics --- 00:25:19.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.760 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2144310 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2144310 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2144310 ']' 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.760 10:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:19.760 [2024-11-20 10:42:51.323424] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
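
The nvmftestinit sequence traced above is what turns the dual-port E810 into a two-node topology on one machine: port cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2) while its peer port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic actually crosses the physical link. A minimal standalone sketch of the same setup, using the interface and namespace names seen in this run:

    # target-side port goes into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring up both ports plus the namespace loopback
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launched just below runs under `ip netns exec cvl_0_0_ns_spdk`, which is why its listener can bind the in-namespace 10.0.0.2.
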
00:25:19.760 [2024-11-20 10:42:51.323490] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.760 [2024-11-20 10:42:51.424219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.760 [2024-11-20 10:42:51.474760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.760 [2024-11-20 10:42:51.474814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.760 [2024-11-20 10:42:51.474823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.760 [2024-11-20 10:42:51.474830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.760 [2024-11-20 10:42:51.474836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.760 [2024-11-20 10:42:51.475614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 [2024-11-20 10:42:52.201244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 null0 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b2fc17bc4b004ac3b807fc58609627b9 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.021 [2024-11-20 10:42:52.261657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.021 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.282 nvme0n1 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.282 [ 00:25:20.282 { 00:25:20.282 "name": "nvme0n1", 00:25:20.282 "aliases": [ 00:25:20.282 "b2fc17bc-4b00-4ac3-b807-fc58609627b9" 00:25:20.282 ], 00:25:20.282 "product_name": "NVMe disk", 00:25:20.282 "block_size": 512, 00:25:20.282 "num_blocks": 2097152, 00:25:20.282 "uuid": "b2fc17bc-4b00-4ac3-b807-fc58609627b9", 00:25:20.282 "numa_id": 0, 00:25:20.282 "assigned_rate_limits": { 00:25:20.282 "rw_ios_per_sec": 0, 00:25:20.282 "rw_mbytes_per_sec": 0, 00:25:20.282 "r_mbytes_per_sec": 0, 00:25:20.282 "w_mbytes_per_sec": 0 00:25:20.282 }, 00:25:20.282 "claimed": false, 00:25:20.282 "zoned": false, 00:25:20.282 "supported_io_types": { 00:25:20.282 "read": true, 00:25:20.282 "write": true, 00:25:20.282 "unmap": false, 00:25:20.282 "flush": true, 00:25:20.282 "reset": true, 00:25:20.282 "nvme_admin": true, 00:25:20.282 "nvme_io": true, 00:25:20.282 "nvme_io_md": false, 00:25:20.282 "write_zeroes": true, 00:25:20.282 "zcopy": false, 00:25:20.282 "get_zone_info": false, 00:25:20.282 "zone_management": false, 00:25:20.282 "zone_append": false, 00:25:20.282 "compare": true, 00:25:20.282 "compare_and_write": true, 00:25:20.282 "abort": true, 00:25:20.282 "seek_hole": false, 00:25:20.282 "seek_data": false, 00:25:20.282 "copy": true, 00:25:20.282 "nvme_iov_md": false 00:25:20.282 }, 00:25:20.282 
"memory_domains": [ 00:25:20.282 { 00:25:20.282 "dma_device_id": "system", 00:25:20.282 "dma_device_type": 1 00:25:20.282 } 00:25:20.282 ], 00:25:20.282 "driver_specific": { 00:25:20.282 "nvme": [ 00:25:20.282 { 00:25:20.282 "trid": { 00:25:20.282 "trtype": "TCP", 00:25:20.282 "adrfam": "IPv4", 00:25:20.282 "traddr": "10.0.0.2", 00:25:20.282 "trsvcid": "4420", 00:25:20.282 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:20.282 }, 00:25:20.282 "ctrlr_data": { 00:25:20.282 "cntlid": 1, 00:25:20.282 "vendor_id": "0x8086", 00:25:20.282 "model_number": "SPDK bdev Controller", 00:25:20.282 "serial_number": "00000000000000000000", 00:25:20.282 "firmware_revision": "25.01", 00:25:20.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.282 "oacs": { 00:25:20.282 "security": 0, 00:25:20.282 "format": 0, 00:25:20.282 "firmware": 0, 00:25:20.282 "ns_manage": 0 00:25:20.282 }, 00:25:20.282 "multi_ctrlr": true, 00:25:20.282 "ana_reporting": false 00:25:20.282 }, 00:25:20.282 "vs": { 00:25:20.282 "nvme_version": "1.3" 00:25:20.282 }, 00:25:20.282 "ns_data": { 00:25:20.282 "id": 1, 00:25:20.282 "can_share": true 00:25:20.282 } 00:25:20.282 } 00:25:20.282 ], 00:25:20.282 "mp_policy": "active_passive" 00:25:20.282 } 00:25:20.282 } 00:25:20.282 ] 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.282 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.282 [2024-11-20 10:42:52.538185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:20.282 [2024-11-20 10:42:52.538273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a1ce0 (9): Bad file descriptor 00:25:20.543 [2024-11-20 10:42:52.672271] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.543 [ 00:25:20.543 { 00:25:20.543 "name": "nvme0n1", 00:25:20.543 "aliases": [ 00:25:20.543 "b2fc17bc-4b00-4ac3-b807-fc58609627b9" 00:25:20.543 ], 00:25:20.543 "product_name": "NVMe disk", 00:25:20.543 "block_size": 512, 00:25:20.543 "num_blocks": 2097152, 00:25:20.543 "uuid": "b2fc17bc-4b00-4ac3-b807-fc58609627b9", 00:25:20.543 "numa_id": 0, 00:25:20.543 "assigned_rate_limits": { 00:25:20.543 "rw_ios_per_sec": 0, 00:25:20.543 "rw_mbytes_per_sec": 0, 00:25:20.543 "r_mbytes_per_sec": 0, 00:25:20.543 "w_mbytes_per_sec": 0 00:25:20.543 }, 00:25:20.543 "claimed": false, 00:25:20.543 "zoned": false, 00:25:20.543 "supported_io_types": { 00:25:20.543 "read": true, 00:25:20.543 "write": true, 00:25:20.543 "unmap": false, 00:25:20.543 "flush": true, 00:25:20.543 "reset": true, 00:25:20.543 "nvme_admin": true, 00:25:20.543 "nvme_io": true, 00:25:20.543 "nvme_io_md": false, 00:25:20.543 "write_zeroes": true, 00:25:20.543 "zcopy": false, 00:25:20.543 "get_zone_info": false, 00:25:20.543 "zone_management": false, 00:25:20.543 "zone_append": false, 00:25:20.543 "compare": true, 00:25:20.543 "compare_and_write": true, 00:25:20.543 "abort": true, 00:25:20.543 "seek_hole": false, 00:25:20.543 "seek_data": false, 00:25:20.543 "copy": true, 00:25:20.543 "nvme_iov_md": false 00:25:20.543 }, 00:25:20.543 "memory_domains": [ 00:25:20.543 { 00:25:20.543 "dma_device_id": "system", 00:25:20.543 "dma_device_type": 1 00:25:20.543 } 00:25:20.543 ], 00:25:20.543 "driver_specific": { 00:25:20.543 "nvme": [ 00:25:20.543 { 00:25:20.543 "trid": { 00:25:20.543 "trtype": "TCP", 00:25:20.543 "adrfam": "IPv4", 00:25:20.543 "traddr": "10.0.0.2", 00:25:20.543 "trsvcid": "4420", 00:25:20.543 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:20.543 }, 00:25:20.543 "ctrlr_data": { 00:25:20.543 "cntlid": 2, 00:25:20.543 "vendor_id": "0x8086", 00:25:20.543 "model_number": "SPDK bdev Controller", 00:25:20.543 "serial_number": "00000000000000000000", 00:25:20.543 "firmware_revision": "25.01", 00:25:20.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.543 "oacs": { 00:25:20.543 "security": 0, 00:25:20.543 "format": 0, 00:25:20.543 "firmware": 0, 00:25:20.543 "ns_manage": 0 00:25:20.543 }, 00:25:20.543 "multi_ctrlr": true, 00:25:20.543 "ana_reporting": false 00:25:20.543 }, 00:25:20.543 "vs": { 00:25:20.543 "nvme_version": "1.3" 00:25:20.543 }, 00:25:20.543 "ns_data": { 00:25:20.543 "id": 1, 00:25:20.543 "can_share": true 00:25:20.543 } 00:25:20.543 } 00:25:20.543 ], 00:25:20.543 "mp_policy": "active_passive" 00:25:20.543 } 00:25:20.543 } 00:25:20.543 ] 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
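
Everything in the async_init body above is driven over the target's RPC socket through the rpc_cmd wrapper, which forwards to scripts/rpc.py. A condensed sketch of the sequence with the literal values from this run (assuming $RPC points at scripts/rpc.py in this workspace); note that the same SPDK process plays both roles, attaching as an NVMe-oF host to its own listener across the namespace boundary:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                       # transport opts copied verbatim from this run
    $RPC bdev_null_create null0 1024 512                       # 1024 MiB of 512 B blocks -> num_blocks 2097152 above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host, for now
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b2fc17bc4b004ac3b807fc58609627b9
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $RPC bdev_nvme_reset_controller nvme0

The two bdev_get_bdevs dumps bracket the reset, and the telltale field is cntlid: 1 before, 2 after, showing the reset really tore the association down and built a new one. Pulling that field out of the dump, assuming jq is on hand:

    $RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
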
00:25:20.543 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bHxACAxJiz 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bHxACAxJiz 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.bHxACAxJiz 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 [2024-11-20 10:42:52.762883] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:20.544 [2024-11-20 10:42:52.763057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 [2024-11-20 10:42:52.786959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.544 nvme0n1 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 [ 00:25:20.544 { 00:25:20.544 "name": "nvme0n1", 00:25:20.544 "aliases": [ 00:25:20.544 "b2fc17bc-4b00-4ac3-b807-fc58609627b9" 00:25:20.544 ], 00:25:20.544 "product_name": "NVMe disk", 00:25:20.544 "block_size": 512, 00:25:20.544 "num_blocks": 2097152, 00:25:20.544 "uuid": "b2fc17bc-4b00-4ac3-b807-fc58609627b9", 00:25:20.544 "numa_id": 0, 00:25:20.544 "assigned_rate_limits": { 00:25:20.544 "rw_ios_per_sec": 0, 00:25:20.544 "rw_mbytes_per_sec": 0, 00:25:20.544 "r_mbytes_per_sec": 0, 00:25:20.544 "w_mbytes_per_sec": 0 00:25:20.544 }, 00:25:20.544 "claimed": false, 00:25:20.544 "zoned": false, 00:25:20.544 "supported_io_types": { 00:25:20.544 "read": true, 00:25:20.544 "write": true, 00:25:20.544 "unmap": false, 00:25:20.544 "flush": true, 00:25:20.544 "reset": true, 00:25:20.544 "nvme_admin": true, 00:25:20.544 "nvme_io": true, 00:25:20.544 "nvme_io_md": false, 00:25:20.544 "write_zeroes": true, 00:25:20.544 "zcopy": false, 00:25:20.544 "get_zone_info": false, 00:25:20.544 "zone_management": false, 00:25:20.544 "zone_append": false, 00:25:20.544 "compare": true, 00:25:20.544 "compare_and_write": true, 00:25:20.544 "abort": true, 00:25:20.544 "seek_hole": false, 00:25:20.544 "seek_data": false, 00:25:20.544 "copy": true, 00:25:20.544 "nvme_iov_md": false 00:25:20.544 }, 00:25:20.544 "memory_domains": [ 00:25:20.544 { 00:25:20.544 "dma_device_id": "system", 00:25:20.544 "dma_device_type": 1 00:25:20.544 } 00:25:20.544 ], 00:25:20.544 "driver_specific": { 00:25:20.544 "nvme": [ 00:25:20.544 { 00:25:20.544 "trid": { 00:25:20.544 "trtype": "TCP", 00:25:20.544 "adrfam": "IPv4", 00:25:20.544 "traddr": "10.0.0.2", 00:25:20.544 "trsvcid": "4421", 00:25:20.544 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:20.544 }, 00:25:20.544 "ctrlr_data": { 00:25:20.544 "cntlid": 3, 00:25:20.544 "vendor_id": "0x8086", 00:25:20.544 "model_number": "SPDK bdev Controller", 00:25:20.544 "serial_number": "00000000000000000000", 00:25:20.544 "firmware_revision": "25.01", 00:25:20.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.544 "oacs": { 00:25:20.544 "security": 0, 00:25:20.544 "format": 0, 00:25:20.544 "firmware": 0, 00:25:20.544 "ns_manage": 0 00:25:20.544 }, 00:25:20.544 "multi_ctrlr": true, 00:25:20.544 "ana_reporting": false 00:25:20.544 }, 00:25:20.544 "vs": { 00:25:20.544 "nvme_version": "1.3" 00:25:20.544 }, 00:25:20.544 "ns_data": { 00:25:20.544 "id": 1, 00:25:20.544 "can_share": true 00:25:20.544 } 00:25:20.544 } 00:25:20.544 ], 00:25:20.544 "mp_policy": "active_passive" 00:25:20.544 } 00:25:20.544 } 00:25:20.544 ] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.bHxACAxJiz 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
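
One identity check worth noticing across all three dumps: the bdev's uuid and alias are just the hyphenated form of the nguid passed to nvmf_subsystem_add_ns at the start (b2fc17bc4b004ac3b807fc58609627b9), so the namespace identity provably survived both the reset and the TLS re-attach. While a controller is attached it can be read back with, for example:

    $RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid'   # -> b2fc17bc-4b00-4ac3-b807-fc58609627b9
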
00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:20.544 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:20.805 rmmod nvme_tcp 00:25:20.805 rmmod nvme_fabrics 00:25:20.805 rmmod nvme_keyring 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2144310 ']' 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2144310 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2144310 ']' 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2144310 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.805 10:42:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144310 00:25:20.805 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.805 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.805 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144310' 00:25:20.805 killing process with pid 2144310 00:25:20.805 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2144310 00:25:20.805 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2144310 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.066 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:21.067 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.067 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.067 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
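
nvmftestfini then unwinds the harness in reverse order, as the rmmod and iptables lines above and below show. Roughly, with this run's pid and names (the namespace removal is inferred from _remove_spdk_ns, whose output is redirected away here):

    kill 2144310                                          # nvmfpid from this run; killprocess also waits for exit
    modprobe -r nvme-tcp nvme-fabrics                     # pulls nvme_keyring out too, per the rmmod lines
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules tagged at setup time
    ip netns delete cvl_0_0_ns_spdk                       # assumed: what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1
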
00:25:21.067 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.067 10:42:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.978 00:25:22.978 real 0m11.780s 00:25:22.978 user 0m4.296s 00:25:22.978 sys 0m6.070s 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:22.978 ************************************ 00:25:22.978 END TEST nvmf_async_init 00:25:22.978 ************************************ 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.978 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.238 ************************************ 00:25:23.238 START TEST dma 00:25:23.238 ************************************ 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:23.238 * Looking for test storage... 00:25:23.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.238 --rc genhtml_branch_coverage=1 00:25:23.238 --rc genhtml_function_coverage=1 00:25:23.238 --rc genhtml_legend=1 00:25:23.238 --rc geninfo_all_blocks=1 00:25:23.238 --rc geninfo_unexecuted_blocks=1 00:25:23.238 00:25:23.238 ' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.238 --rc genhtml_branch_coverage=1 00:25:23.238 --rc genhtml_function_coverage=1 00:25:23.238 --rc genhtml_legend=1 00:25:23.238 --rc geninfo_all_blocks=1 00:25:23.238 --rc geninfo_unexecuted_blocks=1 00:25:23.238 00:25:23.238 ' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.238 --rc genhtml_branch_coverage=1 00:25:23.238 --rc genhtml_function_coverage=1 00:25:23.238 --rc genhtml_legend=1 00:25:23.238 --rc geninfo_all_blocks=1 00:25:23.238 --rc geninfo_unexecuted_blocks=1 00:25:23.238 00:25:23.238 ' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.238 --rc genhtml_branch_coverage=1 00:25:23.238 --rc genhtml_function_coverage=1 00:25:23.238 --rc genhtml_legend=1 00:25:23.238 --rc geninfo_all_blocks=1 00:25:23.238 --rc geninfo_unexecuted_blocks=1 00:25:23.238 00:25:23.238 ' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.238 
10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.238 10:42:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:23.239 00:25:23.239 real 0m0.236s 00:25:23.239 user 0m0.133s 00:25:23.239 sys 0m0.117s 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.239 10:42:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:23.239 ************************************ 00:25:23.239 END TEST dma 00:25:23.239 ************************************ 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.499 ************************************ 00:25:23.499 START TEST nvmf_identify 00:25:23.499 
************************************ 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:23.499 * Looking for test storage... 00:25:23.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.499 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.785 --rc genhtml_branch_coverage=1 00:25:23.785 --rc genhtml_function_coverage=1 00:25:23.785 --rc genhtml_legend=1 00:25:23.785 --rc geninfo_all_blocks=1 00:25:23.785 --rc geninfo_unexecuted_blocks=1 00:25:23.785 00:25:23.785 ' 00:25:23.785 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.785 --rc genhtml_branch_coverage=1 00:25:23.785 --rc genhtml_function_coverage=1 00:25:23.785 --rc genhtml_legend=1 00:25:23.785 --rc geninfo_all_blocks=1 00:25:23.785 --rc geninfo_unexecuted_blocks=1 00:25:23.785 00:25:23.785 ' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.786 --rc genhtml_branch_coverage=1 00:25:23.786 --rc genhtml_function_coverage=1 00:25:23.786 --rc genhtml_legend=1 00:25:23.786 --rc geninfo_all_blocks=1 00:25:23.786 --rc geninfo_unexecuted_blocks=1 00:25:23.786 00:25:23.786 ' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.786 --rc genhtml_branch_coverage=1 00:25:23.786 --rc genhtml_function_coverage=1 00:25:23.786 --rc genhtml_legend=1 00:25:23.786 --rc geninfo_all_blocks=1 00:25:23.786 --rc geninfo_unexecuted_blocks=1 00:25:23.786 00:25:23.786 ' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.786 10:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:31.919 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.919 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.919 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.919 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:31.920 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:31.920 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
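The enumeration above is nvmf/common.sh resolving supported NICs: both Intel E810 functions (0x8086 - 0x159b, bound to the ice driver) pass the PCI-ID filter, and each one is then mapped to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names on the following lines come from. A minimal sketch of that lookup, assuming the 0000:4b:00.0 address seen in the trace:

    pci=0000:4b:00.0
    # each PCI network function publishes its net device name under sysfs
    ls /sys/bus/pci/devices/$pci/net        # prints cvl_0_0 on this host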
00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:31.920 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:31.920 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:25:31.920 00:25:31.920 --- 10.0.0.2 ping statistics --- 00:25:31.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.920 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:25:31.920 00:25:31.920 --- 10.0.0.1 ping statistics --- 00:25:31.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.920 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.920 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2148906 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2148906 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2148906 ']' 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.921 10:43:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:31.921 [2024-11-20 10:43:03.512692] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
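Both pings succeeding confirms the split topology the trace just built: port cvl_0_0 carries the target address 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 keeps the initiator address 10.0.0.1 in the root namespace, so a single dual-port NIC exercises a real TCP path end to end. Condensed from the commands above, the plumbing amounts to:

    # target port moves into its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root ns -> target ns

With connectivity verified, nvmf_tgt is launched inside the namespace and the harness waits on its JSON-RPC socket at /var/tmp/spdk.sock before configuring the target.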
00:25:31.921 [2024-11-20 10:43:03.512760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.921 [2024-11-20 10:43:03.614630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.921 [2024-11-20 10:43:03.669643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.921 [2024-11-20 10:43:03.669697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.921 [2024-11-20 10:43:03.669706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.921 [2024-11-20 10:43:03.669714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.921 [2024-11-20 10:43:03.669720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.921 [2024-11-20 10:43:03.671784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.921 [2024-11-20 10:43:03.671944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.921 [2024-11-20 10:43:03.672106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.921 [2024-11-20 10:43:03.672109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 [2024-11-20 10:43:04.344135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 Malloc0 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 [2024-11-20 10:43:04.461348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.181 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:32.182 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.182 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.182 [ 00:25:32.182 { 00:25:32.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:32.182 "subtype": "Discovery", 00:25:32.182 "listen_addresses": [ 00:25:32.182 { 00:25:32.182 "trtype": "TCP", 00:25:32.182 "adrfam": "IPv4", 00:25:32.182 "traddr": "10.0.0.2", 00:25:32.182 "trsvcid": "4420" 00:25:32.182 } 00:25:32.182 ], 00:25:32.182 "allow_any_host": true, 00:25:32.182 "hosts": [] 00:25:32.182 }, 00:25:32.182 { 00:25:32.182 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.182 "subtype": "NVMe", 00:25:32.182 "listen_addresses": [ 00:25:32.182 { 00:25:32.182 "trtype": "TCP", 00:25:32.182 "adrfam": "IPv4", 00:25:32.182 "traddr": "10.0.0.2", 00:25:32.182 "trsvcid": "4420" 00:25:32.182 } 00:25:32.182 ], 00:25:32.182 "allow_any_host": true, 00:25:32.182 "hosts": [], 00:25:32.182 "serial_number": "SPDK00000000000001", 00:25:32.182 "model_number": "SPDK bdev Controller", 00:25:32.182 "max_namespaces": 32, 00:25:32.182 "min_cntlid": 1, 00:25:32.182 "max_cntlid": 65519, 00:25:32.182 "namespaces": [ 00:25:32.182 { 00:25:32.182 "nsid": 1, 00:25:32.182 "bdev_name": "Malloc0", 00:25:32.182 "name": "Malloc0", 00:25:32.182 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:32.182 "eui64": "ABCDEF0123456789", 00:25:32.182 "uuid": "1e896f07-8d29-491f-bcf6-4ac7d0a8dedc" 00:25:32.182 } 00:25:32.182 ] 00:25:32.182 } 00:25:32.182 ] 00:25:32.182 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.182 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:32.182 [2024-11-20 10:43:04.526212] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:25:32.182 [2024-11-20 10:43:04.526260] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149079 ] 00:25:32.445 [2024-11-20 10:43:04.583883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:32.445 [2024-11-20 10:43:04.583951] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:32.445 [2024-11-20 10:43:04.583957] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:32.445 [2024-11-20 10:43:04.583981] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:32.445 [2024-11-20 10:43:04.583996] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:32.445 [2024-11-20 10:43:04.584833] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:32.445 [2024-11-20 10:43:04.584879] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d75690 0 00:25:32.445 [2024-11-20 10:43:04.595178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:32.445 [2024-11-20 10:43:04.595196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:32.445 [2024-11-20 10:43:04.595202] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:32.445 [2024-11-20 10:43:04.595205] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:32.445 [2024-11-20 10:43:04.595252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.595259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.595264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.445 [2024-11-20 10:43:04.595287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:32.445 [2024-11-20 10:43:04.595312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.445 [2024-11-20 10:43:04.606176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.445 [2024-11-20 10:43:04.606191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.445 [2024-11-20 10:43:04.606195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.445 [2024-11-20 10:43:04.606213] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:32.445 [2024-11-20 10:43:04.606222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:32.445 [2024-11-20 10:43:04.606228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:32.445 [2024-11-20 10:43:04.606248] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.445 [2024-11-20 10:43:04.606267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.445 [2024-11-20 10:43:04.606286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.445 [2024-11-20 10:43:04.606511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.445 [2024-11-20 10:43:04.606519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.445 [2024-11-20 10:43:04.606523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.445 [2024-11-20 10:43:04.606534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:32.445 [2024-11-20 10:43:04.606543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:32.445 [2024-11-20 10:43:04.606550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.445 [2024-11-20 10:43:04.606566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.445 [2024-11-20 10:43:04.606577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.445 [2024-11-20 10:43:04.606721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.445 [2024-11-20 10:43:04.606728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.445 [2024-11-20 10:43:04.606731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.445 [2024-11-20 10:43:04.606741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:32.445 [2024-11-20 10:43:04.606749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:32.445 [2024-11-20 10:43:04.606756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.445 [2024-11-20 10:43:04.606775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.445 [2024-11-20 10:43:04.606786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 
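By this point identify.sh has fully provisioned the target over JSON-RPC: a TCP transport, a Malloc0 ramdisk, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and listeners for both the subsystem and discovery on 10.0.0.2:4420, as the nvmf_get_subsystems dump above shows. As a sketch, the equivalent standalone invocations (the harness's rpc_cmd helper drives the same socket, /var/tmp/spdk.sock; nguid/eui64 flags from the trace omitted here):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The debug stream around this point is spdk_nvme_identify bringing up its admin queue against the discovery subsystem: a FABRIC CONNECT on qid 0, then property GETs stepping the controller state machine (read vs, read cap, check en, wait for CSTS.RDY) before any IDENTIFY data moves.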
00:25:32.445 [2024-11-20 10:43:04.606974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.445 [2024-11-20 10:43:04.606983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.445 [2024-11-20 10:43:04.606987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.606991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.445 [2024-11-20 10:43:04.606997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:32.445 [2024-11-20 10:43:04.607007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.445 [2024-11-20 10:43:04.607021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.445 [2024-11-20 10:43:04.607033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.445 [2024-11-20 10:43:04.607206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.445 [2024-11-20 10:43:04.607215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.445 [2024-11-20 10:43:04.607219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.445 [2024-11-20 10:43:04.607228] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:32.445 [2024-11-20 10:43:04.607234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:32.445 [2024-11-20 10:43:04.607242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:32.445 [2024-11-20 10:43:04.607355] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:32.445 [2024-11-20 10:43:04.607361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:32.445 [2024-11-20 10:43:04.607371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.445 [2024-11-20 10:43:04.607385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.445 [2024-11-20 10:43:04.607396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.445 [2024-11-20 10:43:04.607652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.445 [2024-11-20 10:43:04.607659] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.445 [2024-11-20 10:43:04.607662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.445 [2024-11-20 10:43:04.607671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:32.445 [2024-11-20 10:43:04.607681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.445 [2024-11-20 10:43:04.607691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.607698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.446 [2024-11-20 10:43:04.607709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.446 [2024-11-20 10:43:04.607881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.446 [2024-11-20 10:43:04.607887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.446 [2024-11-20 10:43:04.607890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.607894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.446 [2024-11-20 10:43:04.607899] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:32.446 [2024-11-20 10:43:04.607904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:32.446 [2024-11-20 10:43:04.607912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:32.446 [2024-11-20 10:43:04.607921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:32.446 [2024-11-20 10:43:04.607931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.607935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.607942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.446 [2024-11-20 10:43:04.607953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.446 [2024-11-20 10:43:04.608157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.446 [2024-11-20 10:43:04.608170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.446 [2024-11-20 10:43:04.608174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608178] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d75690): datao=0, datal=4096, cccid=0 00:25:32.446 [2024-11-20 10:43:04.608183] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1dd7100) on tqpair(0x1d75690): expected_datao=0, payload_size=4096 00:25:32.446 [2024-11-20 10:43:04.608188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608202] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.446 [2024-11-20 10:43:04.608312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.446 [2024-11-20 10:43:04.608316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.446 [2024-11-20 10:43:04.608329] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:32.446 [2024-11-20 10:43:04.608334] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:32.446 [2024-11-20 10:43:04.608339] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:32.446 [2024-11-20 10:43:04.608349] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:32.446 [2024-11-20 10:43:04.608354] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:32.446 [2024-11-20 10:43:04.608362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:32.446 [2024-11-20 10:43:04.608374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:32.446 [2024-11-20 10:43:04.608381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.608396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:32.446 [2024-11-20 10:43:04.608407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.446 [2024-11-20 10:43:04.608603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.446 [2024-11-20 10:43:04.608610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.446 [2024-11-20 10:43:04.608613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.446 [2024-11-20 10:43:04.608626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d75690) 00:25:32.446 
[2024-11-20 10:43:04.608640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.446 [2024-11-20 10:43:04.608646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.608665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.446 [2024-11-20 10:43:04.608672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.608685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.446 [2024-11-20 10:43:04.608691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.608704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.446 [2024-11-20 10:43:04.608709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:32.446 [2024-11-20 10:43:04.608718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:32.446 [2024-11-20 10:43:04.608725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.608728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.608735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.446 [2024-11-20 10:43:04.608747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7100, cid 0, qid 0 00:25:32.446 [2024-11-20 10:43:04.608756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7280, cid 1, qid 0 00:25:32.446 [2024-11-20 10:43:04.608761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7400, cid 2, qid 0 00:25:32.446 [2024-11-20 10:43:04.608765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.446 [2024-11-20 10:43:04.608770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7700, cid 4, qid 0 00:25:32.446 [2024-11-20 10:43:04.609063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.446 [2024-11-20 10:43:04.609069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.446 [2024-11-20 10:43:04.609073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:32.446 [2024-11-20 10:43:04.609077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7700) on tqpair=0x1d75690 00:25:32.446 [2024-11-20 10:43:04.609085] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:32.446 [2024-11-20 10:43:04.609091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:32.446 [2024-11-20 10:43:04.609101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.609105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d75690) 00:25:32.446 [2024-11-20 10:43:04.609112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.446 [2024-11-20 10:43:04.609122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7700, cid 4, qid 0 00:25:32.446 [2024-11-20 10:43:04.609269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.446 [2024-11-20 10:43:04.609276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.446 [2024-11-20 10:43:04.609280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.609283] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d75690): datao=0, datal=4096, cccid=4 00:25:32.446 [2024-11-20 10:43:04.609288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7700) on tqpair(0x1d75690): expected_datao=0, payload_size=4096 00:25:32.446 [2024-11-20 10:43:04.609292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.609306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.609310] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.650344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.446 [2024-11-20 10:43:04.650359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.446 [2024-11-20 10:43:04.650363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.446 [2024-11-20 10:43:04.650367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7700) on tqpair=0x1d75690 00:25:32.446 [2024-11-20 10:43:04.650386] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:32.447 [2024-11-20 10:43:04.650419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d75690) 00:25:32.447 [2024-11-20 10:43:04.650433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.447 [2024-11-20 10:43:04.650442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d75690) 00:25:32.447 [2024-11-20 10:43:04.650455] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.447 [2024-11-20 10:43:04.650478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7700, cid 4, qid 0 00:25:32.447 [2024-11-20 10:43:04.650484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7880, cid 5, qid 0 00:25:32.447 [2024-11-20 10:43:04.650717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.447 [2024-11-20 10:43:04.650724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.447 [2024-11-20 10:43:04.650728] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650731] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d75690): datao=0, datal=1024, cccid=4 00:25:32.447 [2024-11-20 10:43:04.650736] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7700) on tqpair(0x1d75690): expected_datao=0, payload_size=1024 00:25:32.447 [2024-11-20 10:43:04.650740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650747] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.447 [2024-11-20 10:43:04.650763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.447 [2024-11-20 10:43:04.650767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.650771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7880) on tqpair=0x1d75690 00:25:32.447 [2024-11-20 10:43:04.693170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.447 [2024-11-20 10:43:04.693184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.447 [2024-11-20 10:43:04.693189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7700) on tqpair=0x1d75690 00:25:32.447 [2024-11-20 10:43:04.693209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d75690) 00:25:32.447 [2024-11-20 10:43:04.693223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.447 [2024-11-20 10:43:04.693242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7700, cid 4, qid 0 00:25:32.447 [2024-11-20 10:43:04.693500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.447 [2024-11-20 10:43:04.693507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.447 [2024-11-20 10:43:04.693512] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693516] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d75690): datao=0, datal=3072, cccid=4 00:25:32.447 [2024-11-20 10:43:04.693521] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7700) on tqpair(0x1d75690): expected_datao=0, payload_size=3072 00:25:32.447 [2024-11-20 10:43:04.693526] 
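
A note on the GET LOG PAGE commands in this discovery read (cdw10 values 00ff0070, 02ff0070 and, just below, 00010070): cdw10 packs the log page identifier into its low byte (0x70 is the Discovery log page) and the zero-based dword count NUMDL into bits 31:16, which is why the c2h_data entries report payloads of 1024, 3072 and finally 8 bytes (the trailing 8-byte read re-checks the generation counter). A minimal decode, assuming nothing beyond the NVMe base-spec field layout (the helper name is ours, not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the GET LOG PAGE fields seen in the log: bits 07:00 of cdw10 are
     * the log page ID (LID), bits 31:16 are NUMDL, a 0's-based dword count. */
    static void decode_get_log_page_cdw10(uint32_t cdw10)
    {
        uint8_t  lid   = cdw10 & 0xff;
        uint32_t numdl = (cdw10 >> 16) & 0xffff;
        printf("lid=0x%02x transfer=%u bytes\n", lid, (numdl + 1) * 4);
    }

    int main(void)
    {
        decode_get_log_page_cdw10(0x00ff0070); /* -> lid=0x70, 1024 bytes */
        decode_get_log_page_cdw10(0x02ff0070); /* -> lid=0x70, 3072 bytes */
        decode_get_log_page_cdw10(0x00010070); /* -> lid=0x70, 8 bytes    */
        return 0;
    }
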
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693545] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693550] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.447 [2024-11-20 10:43:04.693659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.447 [2024-11-20 10:43:04.693663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7700) on tqpair=0x1d75690 00:25:32.447 [2024-11-20 10:43:04.693677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d75690) 00:25:32.447 [2024-11-20 10:43:04.693687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.447 [2024-11-20 10:43:04.693708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7700, cid 4, qid 0 00:25:32.447 [2024-11-20 10:43:04.693970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.447 [2024-11-20 10:43:04.693977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.447 [2024-11-20 10:43:04.693981] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.693986] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d75690): datao=0, datal=8, cccid=4 00:25:32.447 [2024-11-20 10:43:04.693991] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7700) on tqpair(0x1d75690): expected_datao=0, payload_size=8 00:25:32.447 [2024-11-20 10:43:04.693996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.694003] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.694006] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.734356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.447 [2024-11-20 10:43:04.734368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.447 [2024-11-20 10:43:04.734371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.447 [2024-11-20 10:43:04.734375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7700) on tqpair=0x1d75690
00:25:32.447 =====================================================
00:25:32.447 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:32.447 =====================================================
00:25:32.447 Controller Capabilities/Features
00:25:32.447 ================================
00:25:32.447 Vendor ID: 0000
00:25:32.447 Subsystem Vendor ID: 0000
00:25:32.447 Serial Number: ....................
00:25:32.447 Model Number: ........................................
00:25:32.447 Firmware Version: 25.01
00:25:32.447 Recommended Arb Burst: 0
00:25:32.447 IEEE OUI Identifier: 00 00 00
00:25:32.447 Multi-path I/O
00:25:32.447 May have multiple subsystem ports: No
00:25:32.447 May have multiple controllers: No
00:25:32.447 Associated with SR-IOV VF: No
00:25:32.447 Max Data Transfer Size: 131072
00:25:32.447 Max Number of Namespaces: 0
00:25:32.447 Max Number of I/O Queues: 1024
00:25:32.447 NVMe Specification Version (VS): 1.3
00:25:32.447 NVMe Specification Version (Identify): 1.3
00:25:32.447 Maximum Queue Entries: 128
00:25:32.447 Contiguous Queues Required: Yes
00:25:32.447 Arbitration Mechanisms Supported
00:25:32.447 Weighted Round Robin: Not Supported
00:25:32.447 Vendor Specific: Not Supported
00:25:32.447 Reset Timeout: 15000 ms
00:25:32.447 Doorbell Stride: 4 bytes
00:25:32.447 NVM Subsystem Reset: Not Supported
00:25:32.447 Command Sets Supported
00:25:32.447 NVM Command Set: Supported
00:25:32.447 Boot Partition: Not Supported
00:25:32.447 Memory Page Size Minimum: 4096 bytes
00:25:32.447 Memory Page Size Maximum: 4096 bytes
00:25:32.447 Persistent Memory Region: Not Supported
00:25:32.447 Optional Asynchronous Events Supported
00:25:32.447 Namespace Attribute Notices: Not Supported
00:25:32.447 Firmware Activation Notices: Not Supported
00:25:32.447 ANA Change Notices: Not Supported
00:25:32.447 PLE Aggregate Log Change Notices: Not Supported
00:25:32.447 LBA Status Info Alert Notices: Not Supported
00:25:32.447 EGE Aggregate Log Change Notices: Not Supported
00:25:32.447 Normal NVM Subsystem Shutdown event: Not Supported
00:25:32.447 Zone Descriptor Change Notices: Not Supported
00:25:32.447 Discovery Log Change Notices: Supported
00:25:32.447 Controller Attributes
00:25:32.447 128-bit Host Identifier: Not Supported
00:25:32.447 Non-Operational Permissive Mode: Not Supported
00:25:32.447 NVM Sets: Not Supported
00:25:32.447 Read Recovery Levels: Not Supported
00:25:32.447 Endurance Groups: Not Supported
00:25:32.447 Predictable Latency Mode: Not Supported
00:25:32.447 Traffic Based Keep ALive: Not Supported
00:25:32.447 Namespace Granularity: Not Supported
00:25:32.447 SQ Associations: Not Supported
00:25:32.447 UUID List: Not Supported
00:25:32.447 Multi-Domain Subsystem: Not Supported
00:25:32.447 Fixed Capacity Management: Not Supported
00:25:32.447 Variable Capacity Management: Not Supported
00:25:32.447 Delete Endurance Group: Not Supported
00:25:32.447 Delete NVM Set: Not Supported
00:25:32.447 Extended LBA Formats Supported: Not Supported
00:25:32.447 Flexible Data Placement Supported: Not Supported
00:25:32.447
00:25:32.447 Controller Memory Buffer Support
00:25:32.447 ================================
00:25:32.447 Supported: No
00:25:32.447
00:25:32.447 Persistent Memory Region Support
00:25:32.447 ================================
00:25:32.447 Supported: No
00:25:32.447
00:25:32.447 Admin Command Set Attributes
00:25:32.447 ============================
00:25:32.447 Security Send/Receive: Not Supported
00:25:32.447 Format NVM: Not Supported
00:25:32.447 Firmware Activate/Download: Not Supported
00:25:32.447 Namespace Management: Not Supported
00:25:32.448 Device Self-Test: Not Supported
00:25:32.448 Directives: Not Supported
00:25:32.448 NVMe-MI: Not Supported
00:25:32.448 Virtualization Management: Not Supported
00:25:32.448 Doorbell Buffer Config: Not Supported
00:25:32.448 Get LBA Status Capability: Not Supported
00:25:32.448 Command & Feature Lockdown Capability: Not Supported
00:25:32.448 Abort Command Limit: 1
00:25:32.448 Async Event Request Limit: 4
00:25:32.448 Number of Firmware Slots: N/A
00:25:32.448 Firmware Slot 1 Read-Only: N/A
00:25:32.448 Firmware Activation Without Reset: N/A
00:25:32.448 Multiple Update Detection Support: N/A
00:25:32.448 Firmware Update Granularity: No Information Provided
00:25:32.448 Per-Namespace SMART Log: No
00:25:32.448 Asymmetric Namespace Access Log Page: Not Supported
00:25:32.448 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:32.448 Command Effects Log Page: Not Supported
00:25:32.448 Get Log Page Extended Data: Supported
00:25:32.448 Telemetry Log Pages: Not Supported
00:25:32.448 Persistent Event Log Pages: Not Supported
00:25:32.448 Supported Log Pages Log Page: May Support
00:25:32.448 Commands Supported & Effects Log Page: Not Supported
00:25:32.448 Feature Identifiers & Effects Log Page:May Support
00:25:32.448 NVMe-MI Commands & Effects Log Page: May Support
00:25:32.448 Data Area 4 for Telemetry Log: Not Supported
00:25:32.448 Error Log Page Entries Supported: 128
00:25:32.448 Keep Alive: Not Supported
00:25:32.448
00:25:32.448 NVM Command Set Attributes
00:25:32.448 ==========================
00:25:32.448 Submission Queue Entry Size
00:25:32.448 Max: 1
00:25:32.448 Min: 1
00:25:32.448 Completion Queue Entry Size
00:25:32.448 Max: 1
00:25:32.448 Min: 1
00:25:32.448 Number of Namespaces: 0
00:25:32.448 Compare Command: Not Supported
00:25:32.448 Write Uncorrectable Command: Not Supported
00:25:32.448 Dataset Management Command: Not Supported
00:25:32.448 Write Zeroes Command: Not Supported
00:25:32.448 Set Features Save Field: Not Supported
00:25:32.448 Reservations: Not Supported
00:25:32.448 Timestamp: Not Supported
00:25:32.448 Copy: Not Supported
00:25:32.448 Volatile Write Cache: Not Present
00:25:32.448 Atomic Write Unit (Normal): 1
00:25:32.448 Atomic Write Unit (PFail): 1
00:25:32.448 Atomic Compare & Write Unit: 1
00:25:32.448 Fused Compare & Write: Supported
00:25:32.448 Scatter-Gather List
00:25:32.448 SGL Command Set: Supported
00:25:32.448 SGL Keyed: Supported
00:25:32.448 SGL Bit Bucket Descriptor: Not Supported
00:25:32.448 SGL Metadata Pointer: Not Supported
00:25:32.448 Oversized SGL: Not Supported
00:25:32.448 SGL Metadata Address: Not Supported
00:25:32.448 SGL Offset: Supported
00:25:32.448 Transport SGL Data Block: Not Supported
00:25:32.448 Replay Protected Memory Block: Not Supported
00:25:32.448
00:25:32.448 Firmware Slot Information
00:25:32.448 =========================
00:25:32.448 Active slot: 0
00:25:32.448
00:25:32.448
00:25:32.448 Error Log
00:25:32.448 =========
00:25:32.448
00:25:32.448 Active Namespaces
00:25:32.448 =================
00:25:32.448 Discovery Log Page
00:25:32.448 ==================
00:25:32.448 Generation Counter: 2
00:25:32.448 Number of Records: 2
00:25:32.448 Record Format: 0
00:25:32.448
00:25:32.448 Discovery Log Entry 0
00:25:32.448 ----------------------
00:25:32.448 Transport Type: 3 (TCP)
00:25:32.448 Address Family: 1 (IPv4)
00:25:32.448 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:32.448 Entry Flags:
00:25:32.448 Duplicate Returned Information: 1
00:25:32.448 Explicit Persistent Connection Support for Discovery: 1
00:25:32.448 Transport Requirements:
00:25:32.448 Secure Channel: Not Required
00:25:32.448 Port ID: 0 (0x0000)
00:25:32.448 Controller ID: 65535 (0xffff)
00:25:32.448 Admin Max SQ Size: 128
00:25:32.448 Transport Service Identifier: 4420
00:25:32.448 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:32.448 Transport Address: 10.0.0.2
00:25:32.448 Discovery Log Entry 1
00:25:32.448 ----------------------
00:25:32.448 Transport Type: 3 (TCP)
00:25:32.448 Address Family: 1 (IPv4)
00:25:32.448 Subsystem Type: 2 (NVM Subsystem)
00:25:32.448 Entry Flags:
00:25:32.448 Duplicate Returned Information: 0
00:25:32.448 Explicit Persistent Connection Support for Discovery: 0
00:25:32.448 Transport Requirements:
00:25:32.448 Secure Channel: Not Required
00:25:32.448 Port ID: 0 (0x0000)
00:25:32.448 Controller ID: 65535 (0xffff)
00:25:32.448 Admin Max SQ Size: 128
00:25:32.448 Transport Service Identifier: 4420
00:25:32.448 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:32.448 Transport Address: 10.0.0.2
[2024-11-20 10:43:04.734485] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:32.448 [2024-11-20 10:43:04.734498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7100) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.734505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-11-20 10:43:04.734511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7280) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.734516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-11-20 10:43:04.734521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7400) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.734526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-11-20 10:43:04.734531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.734535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-11-20 10:43:04.734548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.734552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.734556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.448 [2024-11-20 10:43:04.734564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.448 [2024-11-20 10:43:04.734580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.448 [2024-11-20 10:43:04.734751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.448 [2024-11-20 10:43:04.734758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.448 [2024-11-20 10:43:04.734761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.734765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.734773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.734777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.734780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.448 [2024-11-20
10:43:04.734787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.448 [2024-11-20 10:43:04.734802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.448 [2024-11-20 10:43:04.735031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.448 [2024-11-20 10:43:04.735037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.448 [2024-11-20 10:43:04.735041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.735045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.735050] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:32.448 [2024-11-20 10:43:04.735055] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:32.448 [2024-11-20 10:43:04.735065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.735068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.735072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.448 [2024-11-20 10:43:04.735079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.448 [2024-11-20 10:43:04.735089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.448 [2024-11-20 10:43:04.735242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.448 [2024-11-20 10:43:04.735249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.448 [2024-11-20 10:43:04.735252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.735256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.448 [2024-11-20 10:43:04.735267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.448 [2024-11-20 10:43:04.735271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.449 [2024-11-20 10:43:04.735281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.449 [2024-11-20 10:43:04.735291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.449 [2024-11-20 10:43:04.735463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.449 [2024-11-20 10:43:04.735469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.449 [2024-11-20 10:43:04.735473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.449 [2024-11-20 10:43:04.735487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735495] 
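
This is the discovery half of the identify test: the host connected to the discovery subsystem, fetched the Discovery log page, printed its two records, and is now tearing the controller down. The four ABORTED - SQ DELETION completions above are the outstanding ASYNC EVENT REQUESTs (cid 0-3) being failed back, and the FABRIC PROPERTY GET/SET pairs implement the CC/CSTS shutdown handshake whose "shutdown complete" message appears just below. A minimal sketch of the same flow against this target using SPDK's public host API (environment initialization is omitted and the function name and error handling are ours, not the test's code):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true; /* discovery header has landed (or failed) */
    }

    static int read_discovery_log(void)
    {
        struct spdk_nvme_transport_id trid = {0};
        /* Target taken from the log: NVMe/TCP discovery service at 10.0.0.2:4420. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2014-08.org.nvmexpress.discovery");

        struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return -1;
        }

        struct spdk_nvmf_discovery_log_page hdr = {0};
        bool done = false;
        /* The first read grabs the 1024-byte header (genctr/numrec); a full
         * client then re-reads entry by entry until genctr is stable, which
         * is exactly the chunked GET LOG PAGE pattern in the log above. */
        spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                         &hdr, sizeof(hdr), 0,
                                         get_log_done, &done);
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", hdr.genctr, hdr.numrec);

        spdk_nvme_detach(ctrlr); /* triggers the destruct/shutdown sequence logged here */
        return 0;
    }
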
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.449 [2024-11-20 10:43:04.735501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.449 [2024-11-20 10:43:04.735512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.449 [2024-11-20 10:43:04.735699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.449 [2024-11-20 10:43:04.735705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.449 [2024-11-20 10:43:04.735708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.449 [2024-11-20 10:43:04.735722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.449 [2024-11-20 10:43:04.735739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.449 [2024-11-20 10:43:04.735750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.449 [2024-11-20 10:43:04.735930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.449 [2024-11-20 10:43:04.735936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.449 [2024-11-20 10:43:04.735940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.449 [2024-11-20 10:43:04.735954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.735961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.449 [2024-11-20 10:43:04.735968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.449 [2024-11-20 10:43:04.735979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.449 [2024-11-20 10:43:04.740168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.449 [2024-11-20 10:43:04.740176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.449 [2024-11-20 10:43:04.740180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.740184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.449 [2024-11-20 10:43:04.740195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.740199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.740202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d75690) 00:25:32.449 [2024-11-20 10:43:04.740209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.449 [2024-11-20 10:43:04.740221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7580, cid 3, qid 0 00:25:32.449 [2024-11-20 10:43:04.740403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.449 [2024-11-20 10:43:04.740409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.449 [2024-11-20 10:43:04.740412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.449 [2024-11-20 10:43:04.740416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd7580) on tqpair=0x1d75690 00:25:32.449 [2024-11-20 10:43:04.740424] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:25:32.449 00:25:32.449 10:43:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:32.449 [2024-11-20 10:43:04.792034] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:25:32.449 [2024-11-20 10:43:04.792112] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149092 ] 00:25:32.713 [2024-11-20 10:43:04.851739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:32.713 [2024-11-20 10:43:04.851805] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:32.713 [2024-11-20 10:43:04.851816] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:32.713 [2024-11-20 10:43:04.851834] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:32.713 [2024-11-20 10:43:04.851847] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:32.714 [2024-11-20 10:43:04.852557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:32.714 [2024-11-20 10:43:04.852601] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x248e690 0 00:25:32.714 [2024-11-20 10:43:04.863178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:32.714 [2024-11-20 10:43:04.863194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:32.714 [2024-11-20 10:43:04.863199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:32.714 [2024-11-20 10:43:04.863202] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:32.714 [2024-11-20 10:43:04.863238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.863244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.863248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.863262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:32.714 [2024-11-20 10:43:04.863286] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.871176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.871186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.871190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.871204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:32.714 [2024-11-20 10:43:04.871212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:32.714 [2024-11-20 10:43:04.871217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:32.714 [2024-11-20 10:43:04.871231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.871247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.871263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.871452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.871460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.871463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.871472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:32.714 [2024-11-20 10:43:04.871480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:32.714 [2024-11-20 10:43:04.871488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.871506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.871518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.871732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.871739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.871743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on 
tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.871752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:32.714 [2024-11-20 10:43:04.871760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:32.714 [2024-11-20 10:43:04.871767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.871775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.871782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.871792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.871998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.872004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.872008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.872017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:32.714 [2024-11-20 10:43:04.872027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.872042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.872052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.872240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.872247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.872251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.872259] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:32.714 [2024-11-20 10:43:04.872265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:32.714 [2024-11-20 10:43:04.872272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:32.714 [2024-11-20 10:43:04.872381] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:32.714 [2024-11-20 10:43:04.872387] 
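
The "check en" / "disable and wait for CSTS.RDY = 0" / "Setting CC.EN = 1" states above, and the CSTS.RDY = 1 wait that follows just below, are the standard NVMe controller-enable handshake; over fabrics each register access travels as one of the FABRIC PROPERTY GET/SET capsules in the log. A toy model of the sequence (the property space and the target's RDY-follows-EN behavior are simulated; real hosts issue Property Get/Set over the admin queue):

    #include <stdint.h>
    #include <stdio.h>

    #define REG_CC   0x14 /* controller configuration register offset */
    #define REG_CSTS 0x1c /* controller status register offset        */

    /* Simulated property space: CSTS.RDY simply follows CC.EN, which is the
     * condition the init state machine above is polling for. */
    static uint32_t props[0x20];
    static uint32_t prop_get(uint32_t off) { return props[off / 4]; }
    static void prop_set(uint32_t off, uint32_t v)
    {
        props[off / 4] = v;
        if (off == REG_CC) {
            props[REG_CSTS / 4] = v & 1; /* toy target: RDY := EN */
        }
    }

    int main(void)
    {
        uint32_t cc = prop_get(REG_CC);           /* "check en"                */
        if (cc & 1) {
            prop_set(REG_CC, cc & ~1u);           /* disable ...               */
            while (prop_get(REG_CSTS) & 1) { }    /* ... wait for CSTS.RDY = 0 */
        }
        prop_set(REG_CC, prop_get(REG_CC) | 1);   /* "Setting CC.EN = 1"       */
        while (!(prop_get(REG_CSTS) & 1)) { }     /* wait for CSTS.RDY = 1     */
        puts("CC.EN = 1 && CSTS.RDY = 1 - controller is ready");
        return 0;
    }
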
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:32.714 [2024-11-20 10:43:04.872395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.872412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.872423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.872615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.872622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.872625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.872634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:32.714 [2024-11-20 10:43:04.872644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.872658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.872668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.872852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.714 [2024-11-20 10:43:04.872859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.714 [2024-11-20 10:43:04.872862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.714 [2024-11-20 10:43:04.872871] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:32.714 [2024-11-20 10:43:04.872875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:32.714 [2024-11-20 10:43:04.872883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:32.714 [2024-11-20 10:43:04.872898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:32.714 [2024-11-20 10:43:04.872907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.872912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x248e690) 00:25:32.714 [2024-11-20 10:43:04.872919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.714 [2024-11-20 10:43:04.872929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.714 [2024-11-20 10:43:04.873155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.714 [2024-11-20 10:43:04.873168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.714 [2024-11-20 10:43:04.873171] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.714 [2024-11-20 10:43:04.873176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=4096, cccid=0 00:25:32.714 [2024-11-20 10:43:04.873180] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0100) on tqpair(0x248e690): expected_datao=0, payload_size=4096 00:25:32.714 [2024-11-20 10:43:04.873185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873193] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873197] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.715 [2024-11-20 10:43:04.873355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.715 [2024-11-20 10:43:04.873359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.715 [2024-11-20 10:43:04.873371] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:32.715 [2024-11-20 10:43:04.873376] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:32.715 [2024-11-20 10:43:04.873380] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:32.715 [2024-11-20 10:43:04.873387] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:32.715 [2024-11-20 10:43:04.873392] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:32.715 [2024-11-20 10:43:04.873397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.873408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.873414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.873429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:32.715 [2024-11-20 10:43:04.873441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x24f0100, cid 0, qid 0 00:25:32.715 [2024-11-20 10:43:04.873654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.715 [2024-11-20 10:43:04.873661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.715 [2024-11-20 10:43:04.873664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.715 [2024-11-20 10:43:04.873675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.873688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.715 [2024-11-20 10:43:04.873695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.873708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.715 [2024-11-20 10:43:04.873714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.873727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.715 [2024-11-20 10:43:04.873733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.873748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.715 [2024-11-20 10:43:04.873753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.873761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.873768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.873771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.873778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.715 [2024-11-20 10:43:04.873791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0100, cid 0, qid 0 00:25:32.715 [2024-11-20 
10:43:04.873796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0280, cid 1, qid 0 00:25:32.715 [2024-11-20 10:43:04.873801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0400, cid 2, qid 0 00:25:32.715 [2024-11-20 10:43:04.873806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.715 [2024-11-20 10:43:04.873811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.715 [2024-11-20 10:43:04.874063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.715 [2024-11-20 10:43:04.874070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.715 [2024-11-20 10:43:04.874074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.715 [2024-11-20 10:43:04.874085] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:32.715 [2024-11-20 10:43:04.874090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.874099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.874107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.874113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.874127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:32.715 [2024-11-20 10:43:04.874138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.715 [2024-11-20 10:43:04.874324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.715 [2024-11-20 10:43:04.874331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.715 [2024-11-20 10:43:04.874334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.715 [2024-11-20 10:43:04.874405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.874415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.874425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.874437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.715 [2024-11-20 10:43:04.874447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.715 [2024-11-20 10:43:04.874677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.715 [2024-11-20 10:43:04.874684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.715 [2024-11-20 10:43:04.874687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874691] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=4096, cccid=4 00:25:32.715 [2024-11-20 10:43:04.874696] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0700) on tqpair(0x248e690): expected_datao=0, payload_size=4096 00:25:32.715 [2024-11-20 10:43:04.874700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.874719] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.919177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.715 [2024-11-20 10:43:04.919189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.715 [2024-11-20 10:43:04.919192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.919197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.715 [2024-11-20 10:43:04.919209] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:32.715 [2024-11-20 10:43:04.919222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.919233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:32.715 [2024-11-20 10:43:04.919240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.919244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.715 [2024-11-20 10:43:04.919251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.715 [2024-11-20 10:43:04.919265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.715 [2024-11-20 10:43:04.919477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.715 [2024-11-20 10:43:04.919483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.715 [2024-11-20 10:43:04.919487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.919491] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=4096, cccid=4 00:25:32.715 [2024-11-20 10:43:04.919495] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0700) on tqpair(0x248e690): expected_datao=0, payload_size=4096 00:25:32.715 [2024-11-20 10:43:04.919500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.715 [2024-11-20 10:43:04.919529] 
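
"Namespace 1 was added" above is where the IDENTIFY with CNS 02h (the cdw10:00000002 command) is turned into namespace handles for cnode1; after initialization an application walks them through the public API. A minimal sketch (the function name is ours; ctrlr is the handle returned by spdk_nvme_connect()):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Walk the active namespaces the init sequence above just discovered. */
    static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("nsid %u: %" PRIu64 " bytes, %u-byte sectors\n", nsid,
                   spdk_nvme_ns_get_size(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }
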
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.919533] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.960346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:04.960356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:04.960359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.960363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:04.960386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:04.960396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:04.960404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.960408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:04.960415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:04.960427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.716 [2024-11-20 10:43:04.960613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.716 [2024-11-20 10:43:04.960620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.716 [2024-11-20 10:43:04.960623] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.960627] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=4096, cccid=4 00:25:32.716 [2024-11-20 10:43:04.960632] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0700) on tqpair(0x248e690): expected_datao=0, payload_size=4096 00:25:32.716 [2024-11-20 10:43:04.960636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.960666] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:04.960670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:05.001355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:05.001359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:05.001372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported features (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001416] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:32.716 [2024-11-20 10:43:05.001421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:32.716 [2024-11-20 10:43:05.001427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:32.716 [2024-11-20 10:43:05.001444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.001455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.001463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.001479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.716 [2024-11-20 10:43:05.001495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.716 [2024-11-20 10:43:05.001500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0880, cid 5, qid 0 00:25:32.716 [2024-11-20 10:43:05.001626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:05.001633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:05.001636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:05.001647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:05.001653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:05.001656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0880) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:05.001670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.001681] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.001691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0880, cid 5, qid 0 00:25:32.716 [2024-11-20 10:43:05.001894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:05.001901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:05.001904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0880) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:05.001918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.001922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.001928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.001938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0880, cid 5, qid 0 00:25:32.716 [2024-11-20 10:43:05.002141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:05.002147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:05.002151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0880) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:05.002171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.002182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.002194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0880, cid 5, qid 0 00:25:32.716 [2024-11-20 10:43:05.002400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.716 [2024-11-20 10:43:05.002407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.716 [2024-11-20 10:43:05.002410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0880) on tqpair=0x248e690 00:25:32.716 [2024-11-20 10:43:05.002434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.002445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.002453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.002463] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.002470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.002480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.002488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x248e690) 00:25:32.716 [2024-11-20 10:43:05.002498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.716 [2024-11-20 10:43:05.002509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0880, cid 5, qid 0 00:25:32.716 [2024-11-20 10:43:05.002514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0700, cid 4, qid 0 00:25:32.716 [2024-11-20 10:43:05.002519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0a00, cid 6, qid 0 00:25:32.716 [2024-11-20 10:43:05.002523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0b80, cid 7, qid 0 00:25:32.716 [2024-11-20 10:43:05.002830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.716 [2024-11-20 10:43:05.002837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.716 [2024-11-20 10:43:05.002841] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002844] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=8192, cccid=5 00:25:32.716 [2024-11-20 10:43:05.002849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0880) on tqpair(0x248e690): expected_datao=0, payload_size=8192 00:25:32.716 [2024-11-20 10:43:05.002854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002924] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.716 [2024-11-20 10:43:05.002928] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.002934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.717 [2024-11-20 10:43:05.002940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.717 [2024-11-20 10:43:05.002943] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.002947] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=512, cccid=4 00:25:32.717 [2024-11-20 10:43:05.002951] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0700) on tqpair(0x248e690): expected_datao=0, payload_size=512 00:25:32.717 [2024-11-20 10:43:05.002956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.002962] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.002966] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:25:32.717 [2024-11-20 10:43:05.002971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.717 [2024-11-20 10:43:05.002982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.717 [2024-11-20 10:43:05.002986] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.002989] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=512, cccid=6 00:25:32.717 [2024-11-20 10:43:05.002994] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0a00) on tqpair(0x248e690): expected_datao=0, payload_size=512 00:25:32.717 [2024-11-20 10:43:05.002998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.003005] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.003008] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.003014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:32.717 [2024-11-20 10:43:05.003020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:32.717 [2024-11-20 10:43:05.003023] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.003027] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x248e690): datao=0, datal=4096, cccid=7 00:25:32.717 [2024-11-20 10:43:05.003031] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24f0b80) on tqpair(0x248e690): expected_datao=0, payload_size=4096 00:25:32.717 [2024-11-20 10:43:05.003035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.003053] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.003057] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.047177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.717 [2024-11-20 10:43:05.047188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.717 [2024-11-20 10:43:05.047191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.047196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0880) on tqpair=0x248e690 00:25:32.717 [2024-11-20 10:43:05.047211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.717 [2024-11-20 10:43:05.047217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.717 [2024-11-20 10:43:05.047221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.047224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0700) on tqpair=0x248e690 00:25:32.717 [2024-11-20 10:43:05.047236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.717 [2024-11-20 10:43:05.047242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.717 [2024-11-20 10:43:05.047245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.047249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0a00) on tqpair=0x248e690 00:25:32.717 [2024-11-20 10:43:05.047256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.717 [2024-11-20 10:43:05.047262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.717 [2024-11-20 
10:43:05.047266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.717 [2024-11-20 10:43:05.047269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0b80) on tqpair=0x248e690 00:25:32.717 ===================================================== 00:25:32.717 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.717 ===================================================== 00:25:32.717 Controller Capabilities/Features 00:25:32.717 ================================ 00:25:32.717 Vendor ID: 8086 00:25:32.717 Subsystem Vendor ID: 8086 00:25:32.717 Serial Number: SPDK00000000000001 00:25:32.717 Model Number: SPDK bdev Controller 00:25:32.717 Firmware Version: 25.01 00:25:32.717 Recommended Arb Burst: 6 00:25:32.717 IEEE OUI Identifier: e4 d2 5c 00:25:32.717 Multi-path I/O 00:25:32.717 May have multiple subsystem ports: Yes 00:25:32.717 May have multiple controllers: Yes 00:25:32.717 Associated with SR-IOV VF: No 00:25:32.717 Max Data Transfer Size: 131072 00:25:32.717 Max Number of Namespaces: 32 00:25:32.717 Max Number of I/O Queues: 127 00:25:32.717 NVMe Specification Version (VS): 1.3 00:25:32.717 NVMe Specification Version (Identify): 1.3 00:25:32.717 Maximum Queue Entries: 128 00:25:32.717 Contiguous Queues Required: Yes 00:25:32.717 Arbitration Mechanisms Supported 00:25:32.717 Weighted Round Robin: Not Supported 00:25:32.717 Vendor Specific: Not Supported 00:25:32.717 Reset Timeout: 15000 ms 00:25:32.717 Doorbell Stride: 4 bytes 00:25:32.717 NVM Subsystem Reset: Not Supported 00:25:32.717 Command Sets Supported 00:25:32.717 NVM Command Set: Supported 00:25:32.717 Boot Partition: Not Supported 00:25:32.717 Memory Page Size Minimum: 4096 bytes 00:25:32.717 Memory Page Size Maximum: 4096 bytes 00:25:32.717 Persistent Memory Region: Not Supported 00:25:32.717 Optional Asynchronous Events Supported 00:25:32.717 Namespace Attribute Notices: Supported 00:25:32.717 Firmware Activation Notices: Not Supported 00:25:32.717 ANA Change Notices: Not Supported 00:25:32.717 PLE Aggregate Log Change Notices: Not Supported 00:25:32.717 LBA Status Info Alert Notices: Not Supported 00:25:32.717 EGE Aggregate Log Change Notices: Not Supported 00:25:32.717 Normal NVM Subsystem Shutdown event: Not Supported 00:25:32.717 Zone Descriptor Change Notices: Not Supported 00:25:32.717 Discovery Log Change Notices: Not Supported 00:25:32.717 Controller Attributes 00:25:32.717 128-bit Host Identifier: Supported 00:25:32.717 Non-Operational Permissive Mode: Not Supported 00:25:32.717 NVM Sets: Not Supported 00:25:32.717 Read Recovery Levels: Not Supported 00:25:32.717 Endurance Groups: Not Supported 00:25:32.717 Predictable Latency Mode: Not Supported 00:25:32.717 Traffic Based Keep ALive: Not Supported 00:25:32.717 Namespace Granularity: Not Supported 00:25:32.717 SQ Associations: Not Supported 00:25:32.717 UUID List: Not Supported 00:25:32.717 Multi-Domain Subsystem: Not Supported 00:25:32.717 Fixed Capacity Management: Not Supported 00:25:32.717 Variable Capacity Management: Not Supported 00:25:32.717 Delete Endurance Group: Not Supported 00:25:32.717 Delete NVM Set: Not Supported 00:25:32.717 Extended LBA Formats Supported: Not Supported 00:25:32.717 Flexible Data Placement Supported: Not Supported 00:25:32.717 00:25:32.717 Controller Memory Buffer Support 00:25:32.717 ================================ 00:25:32.717 Supported: No 00:25:32.717 00:25:32.717 Persistent Memory Region Support 00:25:32.717 ================================ 00:25:32.717 
Supported: No 00:25:32.717 00:25:32.717 Admin Command Set Attributes 00:25:32.717 ============================ 00:25:32.717 Security Send/Receive: Not Supported 00:25:32.717 Format NVM: Not Supported 00:25:32.717 Firmware Activate/Download: Not Supported 00:25:32.717 Namespace Management: Not Supported 00:25:32.717 Device Self-Test: Not Supported 00:25:32.717 Directives: Not Supported 00:25:32.717 NVMe-MI: Not Supported 00:25:32.717 Virtualization Management: Not Supported 00:25:32.717 Doorbell Buffer Config: Not Supported 00:25:32.717 Get LBA Status Capability: Not Supported 00:25:32.717 Command & Feature Lockdown Capability: Not Supported 00:25:32.717 Abort Command Limit: 4 00:25:32.717 Async Event Request Limit: 4 00:25:32.717 Number of Firmware Slots: N/A 00:25:32.717 Firmware Slot 1 Read-Only: N/A 00:25:32.717 Firmware Activation Without Reset: N/A 00:25:32.717 Multiple Update Detection Support: N/A 00:25:32.717 Firmware Update Granularity: No Information Provided 00:25:32.717 Per-Namespace SMART Log: No 00:25:32.717 Asymmetric Namespace Access Log Page: Not Supported 00:25:32.717 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:32.717 Command Effects Log Page: Supported 00:25:32.717 Get Log Page Extended Data: Supported 00:25:32.717 Telemetry Log Pages: Not Supported 00:25:32.717 Persistent Event Log Pages: Not Supported 00:25:32.717 Supported Log Pages Log Page: May Support 00:25:32.717 Commands Supported & Effects Log Page: Not Supported 00:25:32.717 Feature Identifiers & Effects Log Page:May Support 00:25:32.717 NVMe-MI Commands & Effects Log Page: May Support 00:25:32.717 Data Area 4 for Telemetry Log: Not Supported 00:25:32.717 Error Log Page Entries Supported: 128 00:25:32.717 Keep Alive: Supported 00:25:32.717 Keep Alive Granularity: 10000 ms 00:25:32.717 00:25:32.717 NVM Command Set Attributes 00:25:32.717 ========================== 00:25:32.717 Submission Queue Entry Size 00:25:32.717 Max: 64 00:25:32.717 Min: 64 00:25:32.717 Completion Queue Entry Size 00:25:32.717 Max: 16 00:25:32.717 Min: 16 00:25:32.717 Number of Namespaces: 32 00:25:32.717 Compare Command: Supported 00:25:32.717 Write Uncorrectable Command: Not Supported 00:25:32.717 Dataset Management Command: Supported 00:25:32.718 Write Zeroes Command: Supported 00:25:32.718 Set Features Save Field: Not Supported 00:25:32.718 Reservations: Supported 00:25:32.718 Timestamp: Not Supported 00:25:32.718 Copy: Supported 00:25:32.718 Volatile Write Cache: Present 00:25:32.718 Atomic Write Unit (Normal): 1 00:25:32.718 Atomic Write Unit (PFail): 1 00:25:32.718 Atomic Compare & Write Unit: 1 00:25:32.718 Fused Compare & Write: Supported 00:25:32.718 Scatter-Gather List 00:25:32.718 SGL Command Set: Supported 00:25:32.718 SGL Keyed: Supported 00:25:32.718 SGL Bit Bucket Descriptor: Not Supported 00:25:32.718 SGL Metadata Pointer: Not Supported 00:25:32.718 Oversized SGL: Not Supported 00:25:32.718 SGL Metadata Address: Not Supported 00:25:32.718 SGL Offset: Supported 00:25:32.718 Transport SGL Data Block: Not Supported 00:25:32.718 Replay Protected Memory Block: Not Supported 00:25:32.718 00:25:32.718 Firmware Slot Information 00:25:32.718 ========================= 00:25:32.718 Active slot: 1 00:25:32.718 Slot 1 Firmware Revision: 25.01 00:25:32.718 00:25:32.718 00:25:32.718 Commands Supported and Effects 00:25:32.718 ============================== 00:25:32.718 Admin Commands 00:25:32.718 -------------- 00:25:32.718 Get Log Page (02h): Supported 00:25:32.718 Identify (06h): Supported 00:25:32.718 Abort (08h): Supported 
00:25:32.718 Set Features (09h): Supported 00:25:32.718 Get Features (0Ah): Supported 00:25:32.718 Asynchronous Event Request (0Ch): Supported 00:25:32.718 Keep Alive (18h): Supported 00:25:32.718 I/O Commands 00:25:32.718 ------------ 00:25:32.718 Flush (00h): Supported LBA-Change 00:25:32.718 Write (01h): Supported LBA-Change 00:25:32.718 Read (02h): Supported 00:25:32.718 Compare (05h): Supported 00:25:32.718 Write Zeroes (08h): Supported LBA-Change 00:25:32.718 Dataset Management (09h): Supported LBA-Change 00:25:32.718 Copy (19h): Supported LBA-Change 00:25:32.718 00:25:32.718 Error Log 00:25:32.718 ========= 00:25:32.718 00:25:32.718 Arbitration 00:25:32.718 =========== 00:25:32.718 Arbitration Burst: 1 00:25:32.718 00:25:32.718 Power Management 00:25:32.718 ================ 00:25:32.718 Number of Power States: 1 00:25:32.718 Current Power State: Power State #0 00:25:32.718 Power State #0: 00:25:32.718 Max Power: 0.00 W 00:25:32.718 Non-Operational State: Operational 00:25:32.718 Entry Latency: Not Reported 00:25:32.718 Exit Latency: Not Reported 00:25:32.718 Relative Read Throughput: 0 00:25:32.718 Relative Read Latency: 0 00:25:32.718 Relative Write Throughput: 0 00:25:32.718 Relative Write Latency: 0 00:25:32.718 Idle Power: Not Reported 00:25:32.718 Active Power: Not Reported 00:25:32.718 Non-Operational Permissive Mode: Not Supported 00:25:32.718 00:25:32.718 Health Information 00:25:32.718 ================== 00:25:32.718 Critical Warnings: 00:25:32.718 Available Spare Space: OK 00:25:32.718 Temperature: OK 00:25:32.718 Device Reliability: OK 00:25:32.718 Read Only: No 00:25:32.718 Volatile Memory Backup: OK 00:25:32.718 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:32.718 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:32.718 Available Spare: 0% 00:25:32.718 Available Spare Threshold: 0% 00:25:32.718 Life Percentage Used:[2024-11-20 10:43:05.047374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x248e690) 00:25:32.718 [2024-11-20 10:43:05.047387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.718 [2024-11-20 10:43:05.047402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0b80, cid 7, qid 0 00:25:32.718 [2024-11-20 10:43:05.047623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.718 [2024-11-20 10:43:05.047629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.718 [2024-11-20 10:43:05.047633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0b80) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.047677] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:32.718 [2024-11-20 10:43:05.047686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0100) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.047693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.718 [2024-11-20 10:43:05.047698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0280) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.047703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.718 [2024-11-20 10:43:05.047708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0400) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.047713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.718 [2024-11-20 10:43:05.047718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.047723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.718 [2024-11-20 10:43:05.047731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.718 [2024-11-20 10:43:05.047746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.718 [2024-11-20 10:43:05.047758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.718 [2024-11-20 10:43:05.047954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.718 [2024-11-20 10:43:05.047961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.718 [2024-11-20 10:43:05.047964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.047975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.047983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.718 [2024-11-20 10:43:05.047989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.718 [2024-11-20 10:43:05.048004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.718 [2024-11-20 10:43:05.048279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.718 [2024-11-20 10:43:05.048285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.718 [2024-11-20 10:43:05.048289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.048293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.718 [2024-11-20 10:43:05.048298] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:32.718 [2024-11-20 10:43:05.048303] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:32.718 [2024-11-20 10:43:05.048312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.048316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.718 [2024-11-20 10:43:05.048320] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.718 [2024-11-20 10:43:05.048326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.718 [2024-11-20 10:43:05.048342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.718 [2024-11-20 10:43:05.048582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.718 [2024-11-20 10:43:05.048589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.048593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.048597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.048607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.048611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.048614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.048621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.048631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.048831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.048839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.048842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.048846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.048856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.048860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.048864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.048870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.048880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.049085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.049092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.049095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.049109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.049123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.049133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.049389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.049395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.049399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.049412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.049427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.049437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.049639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.049646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.049649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.049663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.049677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.049688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.049893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.049899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.049903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.049916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.049924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.049930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.049940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 
10:43:05.050149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.050155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.050166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.050179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.050193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.050204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.050445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.050451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.050455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.050469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.050483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.050493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.050699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.050705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.050709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.050722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.050737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.050747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.050950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.050957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 
[2024-11-20 10:43:05.050960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.050974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.050981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.050988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.050998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.055171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.055180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.055184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.055188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.055198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.055202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.055206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x248e690) 00:25:32.719 [2024-11-20 10:43:05.055212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.719 [2024-11-20 10:43:05.055224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24f0580, cid 3, qid 0 00:25:32.719 [2024-11-20 10:43:05.055456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:32.719 [2024-11-20 10:43:05.055463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:32.719 [2024-11-20 10:43:05.055467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:32.719 [2024-11-20 10:43:05.055471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24f0580) on tqpair=0x248e690 00:25:32.719 [2024-11-20 10:43:05.055478] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:25:32.719 0% 00:25:32.719 Data Units Read: 0 00:25:32.719 Data Units Written: 0 00:25:32.719 Host Read Commands: 0 00:25:32.719 Host Write Commands: 0 00:25:32.719 Controller Busy Time: 0 minutes 00:25:32.720 Power Cycles: 0 00:25:32.720 Power On Hours: 0 hours 00:25:32.720 Unsafe Shutdowns: 0 00:25:32.720 Unrecoverable Media Errors: 0 00:25:32.720 Lifetime Error Log Entries: 0 00:25:32.720 Warning Temperature Time: 0 minutes 00:25:32.720 Critical Temperature Time: 0 minutes 00:25:32.720 00:25:32.720 Number of Queues 00:25:32.720 ================ 00:25:32.720 Number of I/O Submission Queues: 127 00:25:32.720 Number of I/O Completion Queues: 127 00:25:32.720 00:25:32.720 Active Namespaces 00:25:32.720 ================= 00:25:32.720 Namespace ID:1 00:25:32.720 Error Recovery Timeout: Unlimited 00:25:32.720 Command Set Identifier: NVM (00h) 00:25:32.720 Deallocate: Supported 00:25:32.720 Deallocated/Unwritten 
Error: Not Supported 00:25:32.720 Deallocated Read Value: Unknown 00:25:32.720 Deallocate in Write Zeroes: Not Supported 00:25:32.720 Deallocated Guard Field: 0xFFFF 00:25:32.720 Flush: Supported 00:25:32.720 Reservation: Supported 00:25:32.720 Namespace Sharing Capabilities: Multiple Controllers 00:25:32.720 Size (in LBAs): 131072 (0GiB) 00:25:32.720 Capacity (in LBAs): 131072 (0GiB) 00:25:32.720 Utilization (in LBAs): 131072 (0GiB) 00:25:32.720 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:32.720 EUI64: ABCDEF0123456789 00:25:32.720 UUID: 1e896f07-8d29-491f-bcf6-4ac7d0a8dedc 00:25:32.720 Thin Provisioning: Not Supported 00:25:32.720 Per-NS Atomic Units: Yes 00:25:32.720 Atomic Boundary Size (Normal): 0 00:25:32.720 Atomic Boundary Size (PFail): 0 00:25:32.720 Atomic Boundary Offset: 0 00:25:32.720 Maximum Single Source Range Length: 65535 00:25:32.720 Maximum Copy Length: 65535 00:25:32.720 Maximum Source Range Count: 1 00:25:32.720 NGUID/EUI64 Never Reused: No 00:25:32.720 Namespace Write Protected: No 00:25:32.720 Number of LBA Formats: 1 00:25:32.720 Current LBA Format: LBA Format #00 00:25:32.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:32.720 00:25:32.720 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:32.720 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.720 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.720 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.980 rmmod nvme_tcp 00:25:32.980 rmmod nvme_fabrics 00:25:32.980 rmmod nvme_keyring 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2148906 ']' 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2148906 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2148906 ']' 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2148906 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.980 10:43:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148906 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148906' 00:25:32.980 killing process with pid 2148906 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2148906 00:25:32.980 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2148906 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:33.241 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:33.242 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:33.242 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.242 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.242 10:43:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:35.304 00:25:35.304 real 0m11.833s 00:25:35.304 user 0m9.163s 00:25:35.304 sys 0m6.236s 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:35.304 ************************************ 00:25:35.304 END TEST nvmf_identify 00:25:35.304 ************************************ 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.304 ************************************ 00:25:35.304 START TEST nvmf_perf 00:25:35.304 ************************************ 00:25:35.304 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:35.565 * Looking for test storage... 
00:25:35.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:35.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.565 --rc genhtml_branch_coverage=1 00:25:35.565 --rc genhtml_function_coverage=1 00:25:35.565 --rc genhtml_legend=1 00:25:35.565 --rc geninfo_all_blocks=1 00:25:35.565 --rc geninfo_unexecuted_blocks=1 00:25:35.565 00:25:35.565 ' 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:35.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.565 --rc genhtml_branch_coverage=1 00:25:35.565 --rc genhtml_function_coverage=1 00:25:35.565 --rc genhtml_legend=1 00:25:35.565 --rc geninfo_all_blocks=1 00:25:35.565 --rc geninfo_unexecuted_blocks=1 00:25:35.565 00:25:35.565 ' 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:35.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.565 --rc genhtml_branch_coverage=1 00:25:35.565 --rc genhtml_function_coverage=1 00:25:35.565 --rc genhtml_legend=1 00:25:35.565 --rc geninfo_all_blocks=1 00:25:35.565 --rc geninfo_unexecuted_blocks=1 00:25:35.565 00:25:35.565 ' 00:25:35.565 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:35.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.565 --rc genhtml_branch_coverage=1 00:25:35.566 --rc genhtml_function_coverage=1 00:25:35.566 --rc genhtml_legend=1 00:25:35.566 --rc geninfo_all_blocks=1 00:25:35.566 --rc geninfo_unexecuted_blocks=1 00:25:35.566 00:25:35.566 ' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.566 10:43:07 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.566 10:43:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.704 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:43.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:43.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:43.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.705 10:43:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:43.705 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.705 10:43:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:25:43.705 00:25:43.705 --- 10.0.0.2 ping statistics --- 00:25:43.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.705 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:25:43.705 00:25:43.705 --- 10.0.0.1 ping statistics --- 00:25:43.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.705 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2153418 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2153418 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2153418 ']' 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:43.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.705 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:43.705 [2024-11-20 10:43:15.429689] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:25:43.705 [2024-11-20 10:43:15.429756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.705 [2024-11-20 10:43:15.505930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.705 [2024-11-20 10:43:15.553742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.705 [2024-11-20 10:43:15.553795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.705 [2024-11-20 10:43:15.553802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.705 [2024-11-20 10:43:15.553808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.705 [2024-11-20 10:43:15.553813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.705 [2024-11-20 10:43:15.555671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.705 [2024-11-20 10:43:15.555832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.706 [2024-11-20 10:43:15.555993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.706 [2024-11-20 10:43:15.555996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:43.706 10:43:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:43.966 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:43.966 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:44.227 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:44.227 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:44.487 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
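
The RPC traffic above reduces to a short target-side bdev setup: ask the running app for the locally attached NVMe controller's PCIe address, then create a malloc bdev to serve as a second namespace. A minimal sketch of that flow, assuming rpc.py and jq are on PATH and the app is listening on the default /var/tmp/spdk.sock; the "Nvme0" name comes from the gen_nvme.sh-generated config loaded just before:

# Pull Nvme0's PCIe transport address out of the live bdev configuration.
local_nvme_trid=$(scripts/rpc.py framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr')
# Create a 64 MiB malloc bdev with 512-byte blocks; rpc.py prints its name.
malloc_bdev=$(scripts/rpc.py bdev_malloc_create 64 512)
echo "local NVMe: $local_nvme_trid  malloc bdev: $malloc_bdev"
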
00:25:44.487 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:44.487 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:44.487 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:44.487 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:44.487 [2024-11-20 10:43:16.825711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.747 10:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:44.747 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:44.747 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.007 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:45.008 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:45.268 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.268 [2024-11-20 10:43:17.633441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.528 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.528 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:45.528 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:45.528 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:45.528 10:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:46.909 Initializing NVMe Controllers 00:25:46.909 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:46.909 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:46.909 Initialization complete. Launching workers. 
00:25:46.909 ======================================================== 00:25:46.909 Latency(us) 00:25:46.910 Device Information : IOPS MiB/s Average min max 00:25:46.910 PCIE (0000:65:00.0) NSID 1 from core 0: 78631.97 307.16 406.41 13.45 4964.00 00:25:46.910 ======================================================== 00:25:46.910 Total : 78631.97 307.16 406.41 13.45 4964.00 00:25:46.910 00:25:46.910 10:43:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:48.294 Initializing NVMe Controllers 00:25:48.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:48.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:48.294 Initialization complete. Launching workers. 00:25:48.294 ======================================================== 00:25:48.294 Latency(us) 00:25:48.294 Device Information : IOPS MiB/s Average min max 00:25:48.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.37 10810.11 227.07 46219.62 00:25:48.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16468.92 7956.79 47902.52 00:25:48.294 ======================================================== 00:25:48.294 Total : 157.00 0.61 13008.76 227.07 47902.52 00:25:48.294 00:25:48.294 10:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:49.678 Initializing NVMe Controllers 00:25:49.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:49.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:49.678 Initialization complete. Launching workers. 00:25:49.678 ======================================================== 00:25:49.678 Latency(us) 00:25:49.678 Device Information : IOPS MiB/s Average min max 00:25:49.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11809.17 46.13 2721.64 496.64 10169.84 00:25:49.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3641.44 14.22 8845.55 7240.90 16161.55 00:25:49.678 ======================================================== 00:25:49.678 Total : 15450.61 60.35 4164.93 496.64 16161.55 00:25:49.678 00:25:49.678 10:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:49.678 10:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:49.678 10:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:52.220 Initializing NVMe Controllers 00:25:52.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:52.220 Controller IO queue size 128, less than required. 00:25:52.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:52.220 Controller IO queue size 128, less than required. 00:25:52.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:52.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:52.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:52.220 Initialization complete. Launching workers. 00:25:52.220 ======================================================== 00:25:52.220 Latency(us) 00:25:52.220 Device Information : IOPS MiB/s Average min max 00:25:52.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1793.22 448.31 72714.40 32655.55 137282.60 00:25:52.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.58 145.90 226763.50 71859.15 354824.21 00:25:52.220 ======================================================== 00:25:52.220 Total : 2376.81 594.20 110538.52 32655.55 354824.21 00:25:52.220 00:25:52.220 10:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:52.220 No valid NVMe controllers or AIO or URING devices found 00:25:52.220 Initializing NVMe Controllers 00:25:52.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:52.220 Controller IO queue size 128, less than required. 00:25:52.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:52.220 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:52.220 Controller IO queue size 128, less than required. 00:25:52.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:52.220 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:52.220 WARNING: Some requested NVMe devices were skipped 00:25:52.220 10:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:54.760 Initializing NVMe Controllers 00:25:54.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:54.760 Controller IO queue size 128, less than required. 00:25:54.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.760 Controller IO queue size 128, less than required. 00:25:54.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:54.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:54.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:54.760 Initialization complete. Launching workers. 
00:25:54.760 00:25:54.760 ==================== 00:25:54.760 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:54.760 TCP transport: 00:25:54.760 polls: 33533 00:25:54.760 idle_polls: 21980 00:25:54.760 sock_completions: 11553 00:25:54.760 nvme_completions: 6873 00:25:54.760 submitted_requests: 10316 00:25:54.760 queued_requests: 1 00:25:54.760 00:25:54.760 ==================== 00:25:54.760 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:54.760 TCP transport: 00:25:54.760 polls: 33675 00:25:54.760 idle_polls: 19458 00:25:54.760 sock_completions: 14217 00:25:54.760 nvme_completions: 7685 00:25:54.760 submitted_requests: 11498 00:25:54.760 queued_requests: 1 00:25:54.760 ======================================================== 00:25:54.760 Latency(us) 00:25:54.760 Device Information : IOPS MiB/s Average min max 00:25:54.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1717.58 429.40 76306.37 41248.53 128702.85 00:25:54.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1920.53 480.13 67301.29 29952.81 103419.57 00:25:54.760 ======================================================== 00:25:54.760 Total : 3638.11 909.53 71552.66 29952.81 128702.85 00:25:54.760 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.760 10:43:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.760 rmmod nvme_tcp 00:25:54.760 rmmod nvme_fabrics 00:25:54.760 rmmod nvme_keyring 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2153418 ']' 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2153418 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2153418 ']' 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2153418 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153418 00:25:54.760 10:43:27 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2153418' 00:25:54.760 killing process with pid 2153418 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2153418 00:25:54.760 10:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2153418 00:25:57.301 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.301 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.301 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.302 10:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.215 10:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.215 00:25:59.215 real 0m23.542s 00:25:59.215 user 0m55.433s 00:25:59.215 sys 0m8.597s 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:59.216 ************************************ 00:25:59.216 END TEST nvmf_perf 00:25:59.216 ************************************ 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.216 ************************************ 00:25:59.216 START TEST nvmf_fio_host 00:25:59.216 ************************************ 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:59.216 * Looking for test storage... 
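
The cleanup above (the rmmod lines, killprocess, the iptr/iptables-restore pass, and remove_spdk_ns) is nvmftestfini unwinding the perf test's target environment before nvmf_fio_host begins. A condensed sketch of that teardown under the log's assumptions (interfaces cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, target pid in $nvmfpid); the explicit netns delete is inferred from the _remove_spdk_ns helper and may differ in detail:

# Unload the kernel initiator modules pulled in by the earlier modprobe nvme-tcp.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Stop the nvmf_tgt reactor process and reap it.
kill "$nvmfpid" && wait "$nvmfpid"
# Drop only the SPDK-tagged iptables rules, exactly as the log's iptr helper does.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Remove the target-side namespace and flush the initiator interface address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
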
00:25:59.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.216 --rc genhtml_branch_coverage=1 00:25:59.216 --rc genhtml_function_coverage=1 00:25:59.216 --rc genhtml_legend=1 00:25:59.216 --rc geninfo_all_blocks=1 00:25:59.216 --rc geninfo_unexecuted_blocks=1 00:25:59.216 00:25:59.216 ' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.216 --rc genhtml_branch_coverage=1 00:25:59.216 --rc genhtml_function_coverage=1 00:25:59.216 --rc genhtml_legend=1 00:25:59.216 --rc geninfo_all_blocks=1 00:25:59.216 --rc geninfo_unexecuted_blocks=1 00:25:59.216 00:25:59.216 ' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.216 --rc genhtml_branch_coverage=1 00:25:59.216 --rc genhtml_function_coverage=1 00:25:59.216 --rc genhtml_legend=1 00:25:59.216 --rc geninfo_all_blocks=1 00:25:59.216 --rc geninfo_unexecuted_blocks=1 00:25:59.216 00:25:59.216 ' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:59.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.216 --rc genhtml_branch_coverage=1 00:25:59.216 --rc genhtml_function_coverage=1 00:25:59.216 --rc genhtml_legend=1 00:25:59.216 --rc geninfo_all_blocks=1 00:25:59.216 --rc geninfo_unexecuted_blocks=1 00:25:59.216 00:25:59.216 ' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.216 10:43:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.216 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:59.217 
10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.217 10:43:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:07.356 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:07.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:07.356 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:07.356 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:07.356 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:07.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:26:07.357 00:26:07.357 --- 10.0.0.2 ping statistics --- 00:26:07.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.357 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:26:07.357 00:26:07.357 --- 10.0.0.1 ping statistics --- 00:26:07.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.357 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:07.357 10:43:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2160308 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2160308 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2160308 ']' 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.357 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.357 [2024-11-20 10:43:39.083198] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
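The sequence traced above is the fixed network topology every test in this run reuses: the two ports of the E810 NIC are looped back to each other (back-to-back cable or a common switch, which is why the cross-namespace pings succeed), port cvl_0_0 is moved into a private network namespace as the target side at 10.0.0.2, and cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A standalone sketch of the same bring-up, using the interface names and addresses from this run (they will differ on other hardware; the harness additionally tags its iptables rule with an SPDK_NVMF comment so teardown can find it again):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # target port gets its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # one ping in each direction proves the loopback path works
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1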
00:26:07.357 [2024-11-20 10:43:39.083270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.357 [2024-11-20 10:43:39.185221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:07.357 [2024-11-20 10:43:39.239033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.357 [2024-11-20 10:43:39.239085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.357 [2024-11-20 10:43:39.239093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.357 [2024-11-20 10:43:39.239106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.357 [2024-11-20 10:43:39.239112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:07.357 [2024-11-20 10:43:39.241154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.357 [2024-11-20 10:43:39.241451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.357 [2024-11-20 10:43:39.241453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.357 [2024-11-20 10:43:39.241229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.617 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.617 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:07.617 10:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:07.877 [2024-11-20 10:43:40.066817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.877 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:07.877 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.877 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.877 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:08.137 Malloc1 00:26:08.137 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.397 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:08.397 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.657 [2024-11-20 10:43:40.936188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.657 10:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:08.917 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:08.918 10:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:09.178 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:09.178 fio-3.35 00:26:09.178 Starting 1 thread 00:26:11.723 00:26:11.723 test: (groupid=0, jobs=1): 
err= 0: pid=2161025: Wed Nov 20 10:43:43 2024 00:26:11.723 read: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.4MiB/2004msec) 00:26:11.723 slat (usec): min=2, max=317, avg= 2.17, stdev= 2.82 00:26:11.723 clat (usec): min=3888, max=9345, avg=5977.37, stdev=1218.47 00:26:11.723 lat (usec): min=3891, max=9347, avg=5979.55, stdev=1218.48 00:26:11.723 clat percentiles (usec): 00:26:11.723 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 00:26:11.723 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5604], 00:26:11.723 | 70.00th=[ 6063], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8291], 00:26:11.723 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[ 9110], 99.95th=[ 9110], 00:26:11.723 | 99.99th=[ 9241] 00:26:11.723 bw ( KiB/s): min=35064, max=55904, per=99.90%, avg=47178.00, stdev=9500.95, samples=4 00:26:11.723 iops : min= 8766, max=13976, avg=11794.50, stdev=2375.24, samples=4 00:26:11.723 write: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(91.9MiB/2004msec); 0 zone resets 00:26:11.723 slat (usec): min=2, max=274, avg= 2.27, stdev= 1.97 00:26:11.723 clat (usec): min=2995, max=8090, avg=4822.00, stdev=976.99 00:26:11.723 lat (usec): min=3014, max=8092, avg=4824.28, stdev=977.03 00:26:11.723 clat percentiles (usec): 00:26:11.723 | 1.00th=[ 3556], 5.00th=[ 3818], 10.00th=[ 3916], 20.00th=[ 4080], 00:26:11.723 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4555], 00:26:11.723 | 70.00th=[ 4948], 80.00th=[ 6063], 90.00th=[ 6390], 95.00th=[ 6652], 00:26:11.723 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 7439], 99.95th=[ 7504], 00:26:11.723 | 99.99th=[ 7963] 00:26:11.723 bw ( KiB/s): min=35896, max=54976, per=99.96%, avg=46962.00, stdev=8975.12, samples=4 00:26:11.723 iops : min= 8974, max=13744, avg=11740.50, stdev=2243.78, samples=4 00:26:11.723 lat (msec) : 4=7.57%, 10=92.43% 00:26:11.723 cpu : usr=68.45%, sys=29.96%, ctx=17, majf=0, minf=17 00:26:11.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:11.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:11.723 issued rwts: total=23660,23537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:11.723 00:26:11.723 Run status group 0 (all jobs): 00:26:11.723 READ: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.4MiB (96.9MB), run=2004-2004msec 00:26:11.723 WRITE: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=91.9MiB (96.4MB), run=2004-2004msec 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 
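Both fio passes in this test go through SPDK's userspace NVMe driver rather than the kernel initiator: the fio_nvme wrapper LD_PRELOADs the build/fio/spdk_nvme plugin, the job selects ioengine=spdk, and the NVMe-oF connection parameters travel in --filename instead of a block-device path. The log does not show the contents of example_config.fio, so the job file below is an illustrative stand-in assembled from the options visible in the output above (randrw, 4 KiB via --bs, iodepth 128, one job, roughly a 2 s run):

    cat > /tmp/tcp_job.fio <<'EOF'
    [global]
    ioengine=spdk        ; provided by the preloaded plugin
    thread=1             ; the SPDK fio plugin requires fio's thread mode
    direct=1
    rw=randrw
    iodepth=128
    time_based=1
    runtime=2

    [test]
    numjobs=1
    EOF
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /tmp/tcp_job.fio --bs=4096 \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'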
00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:11.723 10:43:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:11.723 10:43:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:11.723 10:43:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:11.723 10:43:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:11.723 10:43:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:12.291 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:12.291 fio-3.35 00:26:12.291 Starting 1 thread 00:26:14.836 00:26:14.836 test: (groupid=0, jobs=1): err= 0: pid=2161660: Wed Nov 20 10:43:46 2024 00:26:14.836 read: IOPS=9477, BW=148MiB/s (155MB/s)(297MiB/2007msec) 00:26:14.836 slat (usec): min=3, max=116, avg= 3.60, stdev= 1.57 00:26:14.836 clat (usec): min=984, max=14764, avg=8304.53, stdev=1921.33 00:26:14.836 lat (usec): min=988, max=14767, avg=8308.14, stdev=1921.44 00:26:14.836 clat percentiles (usec): 00:26:14.836 | 1.00th=[ 4359], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6521], 00:26:14.836 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8717], 00:26:14.836 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11076], 95.00th=[11469], 00:26:14.836 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13698], 99.95th=[14091], 00:26:14.836 | 99.99th=[14746] 00:26:14.836 bw ( KiB/s): min=71136, max=82304, per=49.23%, avg=74656.00, stdev=5169.82, samples=4 00:26:14.836 iops : min= 4446, max= 5144, avg=4666.00, stdev=323.11, samples=4 00:26:14.836 write: IOPS=5734, BW=89.6MiB/s (94.0MB/s)(153MiB/1706msec); 0 zone resets 00:26:14.836 slat (usec): min=39, 
max=359, avg=40.90, stdev= 6.89 00:26:14.836 clat (usec): min=2617, max=14493, avg=9092.41, stdev=1304.60 00:26:14.836 lat (usec): min=2657, max=14599, avg=9133.32, stdev=1306.11 00:26:14.836 clat percentiles (usec): 00:26:14.836 | 1.00th=[ 6390], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7963], 00:26:14.836 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:26:14.836 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10814], 95.00th=[11338], 00:26:14.836 | 99.00th=[12780], 99.50th=[13304], 99.90th=[13960], 99.95th=[14353], 00:26:14.836 | 99.99th=[14484] 00:26:14.837 bw ( KiB/s): min=72992, max=85632, per=85.06%, avg=78040.00, stdev=5374.37, samples=4 00:26:14.837 iops : min= 4562, max= 5352, avg=4877.50, stdev=335.90, samples=4 00:26:14.837 lat (usec) : 1000=0.01% 00:26:14.837 lat (msec) : 2=0.02%, 4=0.46%, 10=77.71%, 20=21.81% 00:26:14.837 cpu : usr=84.05%, sys=14.51%, ctx=12, majf=0, minf=39 00:26:14.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:14.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:14.837 issued rwts: total=19022,9783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.837 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:14.837 00:26:14.837 Run status group 0 (all jobs): 00:26:14.837 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=297MiB (312MB), run=2007-2007msec 00:26:14.837 WRITE: bw=89.6MiB/s (94.0MB/s), 89.6MiB/s-89.6MiB/s (94.0MB/s-94.0MB/s), io=153MiB (160MB), run=1706-1706msec 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.837 rmmod nvme_tcp 00:26:14.837 rmmod nvme_fabrics 00:26:14.837 rmmod nvme_keyring 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2160308 ']' 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2160308 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2160308 ']' 00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 2160308
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160308
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160308'
00:26:14.837 killing process with pid 2160308
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2160308
00:26:14.837 10:43:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2160308
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:14.837 10:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:17.384
00:26:17.384 real 0m17.931s
00:26:17.384 user 1m4.172s
00:26:17.384 sys 0m7.995s
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.384 ************************************
00:26:17.384 END TEST nvmf_fio_host
00:26:17.384 ************************************
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:17.384 ************************************
00:26:17.384 START TEST nvmf_failover
00:26:17.384 ************************************
00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:17.384 * Looking for test storage... 00:26:17.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.384 --rc genhtml_branch_coverage=1 00:26:17.384 --rc genhtml_function_coverage=1 00:26:17.384 --rc genhtml_legend=1 00:26:17.384 --rc geninfo_all_blocks=1 00:26:17.384 --rc geninfo_unexecuted_blocks=1 00:26:17.384 00:26:17.384 ' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.384 --rc genhtml_branch_coverage=1 00:26:17.384 --rc genhtml_function_coverage=1 00:26:17.384 --rc genhtml_legend=1 00:26:17.384 --rc geninfo_all_blocks=1 00:26:17.384 --rc geninfo_unexecuted_blocks=1 00:26:17.384 00:26:17.384 ' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.384 --rc genhtml_branch_coverage=1 00:26:17.384 --rc genhtml_function_coverage=1 00:26:17.384 --rc genhtml_legend=1 00:26:17.384 --rc geninfo_all_blocks=1 00:26:17.384 --rc geninfo_unexecuted_blocks=1 00:26:17.384 00:26:17.384 ' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.384 --rc genhtml_branch_coverage=1 00:26:17.384 --rc genhtml_function_coverage=1 00:26:17.384 --rc genhtml_legend=1 00:26:17.384 --rc geninfo_all_blocks=1 00:26:17.384 --rc geninfo_unexecuted_blocks=1 00:26:17.384 00:26:17.384 ' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.384 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
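The complaint '[: : integer expression expected' in the middle of this block is a real, if benign, shell bug rather than log noise: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and the test builtin's -eq operator requires integer operands, so an unset or empty variable makes the test command itself fail (exit status 2) instead of simply evaluating false. The run carries on because a failed test is still treated as false. A defensive pattern for numeric flag checks of this shape (FLAG below is a placeholder, not the variable common.sh actually reads):

    [ "$FLAG" -eq 1 ]          # fails with "[: : integer expression expected" when FLAG is empty
    [ "${FLAG:-0}" -eq 1 ]     # default the value first, so the comparison is always numeric
    [[ "$FLAG" == 1 ]]         # or compare as a string, which never type-errors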
00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.385 10:43:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:25.527 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:25.527 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:25.527 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:25.527 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.527 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
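Device discovery in gather_supported_nvmf_pci_devs, traced again here for the failover test, is plain sysfs: the script keeps tables of supported vendor:device IDs (Intel 0x1592/0x159b for E810, 0x37d2 for X722, plus the Mellanox list built above), and for every matching PCI function it globs /sys/bus/pci/devices/$pci/net/* and keeps the basename, which is how 0000:4b:00.0 resolves to cvl_0_0. The same lookup by hand:

    pci=0000:4b:00.0
    # vendor and device IDs; this run shows 0x8086 / 0x159b (Intel E810)
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
    # kernel net devices registered on that function (empty if bound to vfio-pci/uio)
    ls /sys/bus/pci/devices/$pci/net/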
00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:26:25.528 00:26:25.528 --- 10.0.0.2 ping statistics --- 00:26:25.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.528 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:26:25.528 00:26:25.528 --- 10.0.0.1 ping statistics --- 00:26:25.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.528 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2166278 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2166278 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2166278 ']' 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.528 10:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:25.528 [2024-11-20 10:43:56.997770] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:26:25.528 [2024-11-20 10:43:56.997834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.528 [2024-11-20 10:43:57.095797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:25.528 [2024-11-20 10:43:57.148141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:25.528 [2024-11-20 10:43:57.148196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.528 [2024-11-20 10:43:57.148204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.528 [2024-11-20 10:43:57.148211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.528 [2024-11-20 10:43:57.148217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.528 [2024-11-20 10:43:57.150025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.528 [2024-11-20 10:43:57.150195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.528 [2024-11-20 10:43:57.150256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.528 10:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:25.790 [2024-11-20 10:43:58.038835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.790 10:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:26.050 Malloc0 00:26:26.050 10:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.341 10:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:26.341 10:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.603 [2024-11-20 10:43:58.864297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.604 10:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:26.865 [2024-11-20 10:43:59.060704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:26.865 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:27.125 [2024-11-20 10:43:59.245251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2166868
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2166868 /var/tmp/bdevperf.sock
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2166868 ']'
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:27.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:27.125 10:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:28.126 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:28.126 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:28.126 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:28.126 NVMe0n1
00:26:28.126 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:28.386 
00:26:28.386 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2167064
00:26:28.386 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:28.386 10:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:26:29.770 10:44:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:29.770 [2024-11-20 10:44:01.865004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19204f0 is same with the state(6) to be set
[... the same recv-state notice for tqpair=0x19204f0 repeats, elided ...]
00:26:29.770 10:44:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:33.065 10:44:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:33.065 
00:26:33.065 10:44:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:33.065 [2024-11-20 10:44:05.372621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1921040 is same with the state(6) to be set
[... the same recv-state notice for tqpair=0x1921040 repeats, elided ...]
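The two removals above are the crux of the failover exercise: while bdevperf runs verify I/O, the active listener is torn down, in-flight commands complete as ABORTED, and the controller attached with -x failover moves NVMe0 to the next portal. Condensed into a standalone sketch (these are exactly the RPCs logged above; the sleeps only approximate the script's pacing):

  RPC="$SPDK/scripts/rpc.py"   # $SPDK as in the earlier sketch
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the active 4420 path; I/O fails over to the 4421 path.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Publish a third path on 4422 to the initiator, then retire 4421 as well.
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421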
00:26:33.066 10:44:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:36.363 10:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:36.363 [2024-11-20 10:44:08.562625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:36.363 10:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:37.304 10:44:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:37.564 [2024-11-20 10:44:09.753862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e64c0 is same with the state(6) to be set
[... the same recv-state notice for tqpair=0x17e64c0 repeats, elided ...]
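At this point the listener set has been rotated through all three portals and back to 4420. When replaying this by hand it is worth confirming which paths the initiator currently holds; one way, assuming the same sockets as above (the exact output shape varies across SPDK versions), is to query the bdevperf app's RPC socket:

  # List the NVMe0 controller and its paths as the initiator-side app sees them.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0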
00:26:37.565 10:44:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2167064
00:26:44.441 {
00:26:44.441 "results": [
00:26:44.441 {
00:26:44.441 "job": "NVMe0n1",
00:26:44.441 "core_mask": "0x1",
00:26:44.441 "workload": "verify",
00:26:44.441 "status": "finished",
00:26:44.441 "verify_range": {
00:26:44.441 "start": 0,
00:26:44.441 "length": 16384
00:26:44.441 },
00:26:44.441 "queue_depth": 128,
00:26:44.441 "io_size": 4096,
00:26:44.441 "runtime": 15.007273,
00:26:44.441 "iops": 12398.321800369727,
00:26:44.441 "mibps": 48.43094453269425,
00:26:44.441 "io_failed": 18212,
00:26:44.441 "io_timeout": 0,
00:26:44.441 "avg_latency_us": 9383.265923819128,
00:26:44.441 "min_latency_us": 539.3066666666666,
00:26:44.441 "max_latency_us": 21845.333333333332
00:26:44.441 }
00:26:44.441 ],
00:26:44.441 "core_count": 1
00:26:44.441 }
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2166868
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2166868 ']'
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2166868
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166868
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166868'
00:26:44.441 killing process with pid 2166868
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2166868
00:26:44.441 10:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2166868
00:26:44.441 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:44.441 [2024-11-20 10:43:59.314179] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
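The JSON block above is the perform_tests summary: roughly 12398 IOPS over the 15.007 s runtime, with io_failed=18212 counting the commands aborted across the listener removals (bdevperf keeps running because it was started with -f). Everything from here on is the bdevperf-side log replayed from try.txt. If the summary is saved to a file (results.json is a hypothetical name), the headline numbers can be pulled out with jq:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json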
00:26:44.441 [2024-11-20 10:43:59.314237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166868 ]
00:26:44.441 [2024-11-20 10:43:59.401138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.441 [2024-11-20 10:43:59.436837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:44.441 Running I/O for 15 seconds...
00:26:44.441 11039.00 IOPS, 43.12 MiB/s [2024-11-20T09:44:16.817Z]
00:26:44.441 [2024-11-20 10:44:01.868007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.441 [2024-11-20 10:44:01.868042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further aborted commands (lba 95488 through 96176) and their manual completions elided; every completion is ABORTED - SQ DELETION (00/08) ...]
00:26:44.444 [2024-11-20 10:44:01.869684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:44.444 [2024-11-20 10:44:01.869690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:44.444 [2024-11-20 10:44:01.869696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96184 len:8 PRP1 0x0 PRP2 0x0
00:26:44.444 [2024-11-20 10:44:01.869703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.444 [2024-11-20 10:44:01.869711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:44.444 [2024-11-20 10:44:01.869717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:44.444 [2024-11-20 10:44:01.869723] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96192 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96200 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96216 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96224 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96232 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96240 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96248 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96264 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.869978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.869984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.869990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96272 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.869997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96288 len:8 PRP1 0x0 PRP2 0x0 
00:26:44.444 [2024-11-20 10:44:01.870049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96296 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96304 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96320 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96336 len:8 PRP1 0x0 PRP2 0x0 00:26:44.444 [2024-11-20 10:44:01.870214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.444 [2024-11-20 10:44:01.870221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.444 [2024-11-20 10:44:01.870227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.444 [2024-11-20 10:44:01.870233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96344 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96360 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96368 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96384 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96392 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.870399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.870407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.870412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.870418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96400 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.880856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.880887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.880895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.880903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96408 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.880912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.880919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.880925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.880931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96416 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.880939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.880947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.880953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.880959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.880966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.880974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.880979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.880985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96432 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.880993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:44.445 [2024-11-20 10:44:01.881000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96440 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96448 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96456 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96464 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96472 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96480 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881169] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.445 [2024-11-20 10:44:01.881201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.445 [2024-11-20 10:44:01.881207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96496 len:8 PRP1 0x0 PRP2 0x0 00:26:44.445 [2024-11-20 10:44:01.881214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.445 [2024-11-20 10:44:01.881257] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:44.445 [2024-11-20 10:44:01.881289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.445 [2024-11-20 10:44:01.881298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.446 [2024-11-20 10:44:01.881307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.446 [2024-11-20 10:44:01.881315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.446 [2024-11-20 10:44:01.881323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.446 [2024-11-20 10:44:01.881330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.446 [2024-11-20 10:44:01.881339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.446 [2024-11-20 10:44:01.881346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.446 [2024-11-20 10:44:01.881361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:44.446 [2024-11-20 10:44:01.881406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1943d70 (9): Bad file descriptor 00:26:44.446 [2024-11-20 10:44:01.884900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:44.446 [2024-11-20 10:44:02.041519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
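
This burst is the teardown path for the first path: once the TCP connection to 10.0.0.2:4420 dies, every in-flight and queued command on the I/O queue (qid:1) is completed as ABORTED - SQ DELETION (00/08), the admin queue's ASYNC EVENT REQUESTs are aborted, and bdev_nvme fails the controller over to 10.0.0.2:4421 and resets it. For auditing long runs like this one, a short script can tally the aborts per queue and surface the failover events; the following is a minimal sketch that only pattern-matches the message text visible above (the regexes are illustrative assumptions, not an SPDK interface):

#!/usr/bin/env python3
# Tally SQ-deletion aborts per queue and list failover events from an SPDK
# console log on stdin. Sketch only: assumes the message format shown above.
import re
import sys
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: \[([^,]+), \d+\] "
    r"Start failover from (\S+) to (\S+)")

aborts = Counter()   # qid -> number of aborted completions
failovers = []       # (subsystem NQN, old trid, new trid)
for line in sys.stdin:
    # A single wrapped console line may carry several log entries, so scan
    # each line for every match rather than stopping at the first one.
    for m in ABORT_RE.finditer(line):
        aborts[int(m.group(1))] += 1
    for m in FAILOVER_RE.finditer(line):
        failovers.append(m.groups())

for qid in sorted(aborts):
    print(f"qid {qid}: {aborts[qid]} completions aborted by SQ deletion")
for nqn, old, new in failovers:
    print(f"failover on {nqn}: {old} -> {new}")

Fed this console log on stdin (e.g. python3 tally_aborts.py < console.log, where tally_aborts.py is a hypothetical name), it prints one abort count per queue plus each failover edge, which makes it easy to confirm that every abort burst is paired with a successful controller reset.
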
00:26:44.446 10546.00 IOPS, 41.20 MiB/s [2024-11-20T09:44:16.822Z] 10759.00 IOPS, 42.03 MiB/s [2024-11-20T09:44:16.822Z] 11250.25 IOPS, 43.95 MiB/s [2024-11-20T09:44:16.822Z]
[2024-11-20 10:44:05.373649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.446 [2024-11-20 10:44:05.373679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 77 similar READ commands (sqid:1, lba:75024-75632, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and their ABORTED - SQ DELETION (00/08) completions elided, 10:44:05.373691-10:44:05.374586 ...]
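
Aside: the per-interval throughput figures above (10546.00 / 10759.00 / 11250.25 IOPS) are self-consistent with the trace, since every command here is len:8 blocks, i.e. 4 KiB per I/O assuming 512-byte logical blocks (the block size is not printed in the log, but it is the only size that reproduces the reported MiB/s). A quick check:

#!/usr/bin/env python3
# Cross-check the bdevperf interval stats against the trace's I/O size.
# len:8 blocks per command; 512-byte logical blocks are an assumption.
IO_BYTES = 8 * 512  # 4 KiB per command

for iops, reported in [(10546.00, 41.20), (10759.00, 42.03), (11250.25, 43.95)]:
    mibps = iops * IO_BYTES / (1024 * 1024)
    print(f"{iops:>9.2f} IOPS * 4 KiB = {mibps:5.2f} MiB/s (log says {reported:5.2f})")

All three computed values round to the logged figures, so the burst of aborted 4 KiB I/Os above is the same workload the throughput counters were measuring.
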
00:26:44.448 [2024-11-20 10:44:05.374593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.448 [2024-11-20 10:44:05.374598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 21 similar WRITE commands (sqid:1, lba:75648-75808, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and their ABORTED - SQ DELETION (00/08) completions elided, 10:44:05.374604-10:44:05.374837 ...]
00:26:44.448 [2024-11-20 10:44:05.374843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.448 [2024-11-20
10:44:05.374848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.448 [2024-11-20 10:44:05.374985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.448 [2024-11-20 10:44:05.374992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.374997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.449 [2024-11-20 10:44:05.375071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.449 [2024-11-20 10:44:05.375076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
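Two abort paths are interleaved here. Commands already in flight on the TCP qpair get a print_command/print_completion pair ending in ABORTED - SQ DELETION, while the three requests just below were still queued in the driver and are finished by nvme_qpair_manual_complete_request after nvme_qpair_abort_queued_reqs runs (their PRP1 0x0 PRP2 0x0 suggests they were never mapped for the wire). A rough way to tally the two paths from a saved copy of this log; build.log is a placeholder filename, and the patterns simply match the lines reproduced above:

  # in-flight commands: one completion print per aborted I/O
  grep -c 'ABORTED - SQ DELETION' build.log
  # queued-but-unsent requests finished manually by the driver
  grep -c 'nvme_qpair_manual_complete_request' build.log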
00:26:44.449 [2024-11-20 10:44:05.375138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:44.449 [2024-11-20 10:44:05.375143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76016 len:8 PRP1 0x0 PRP2 0x0
00:26:44.449 [2024-11-20 10:44:05.375148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.449 [2024-11-20 10:44:05.375156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:44.449 [2024-11-20 10:44:05.375162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:44.449 [2024-11-20 10:44:05.375167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76024 len:8 PRP1 0x0 PRP2 0x0
00:26:44.449 [2024-11-20 10:44:05.375172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.449 [2024-11-20 10:44:05.375177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:44.449 [2024-11-20 10:44:05.375181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:44.449 [2024-11-20 10:44:05.375186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76032 len:8 PRP1 0x0 PRP2 0x0
00:26:44.449 [2024-11-20 10:44:05.375191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.449 [2024-11-20 10:44:05.375224] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
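The failover notice marks bdev_nvme abandoning the 10.0.0.2:4421 path and retrying I/O on 10.0.0.2:4422; the SQ-deletion aborts above are the expected fallout of tearing down the old qpair. A minimal sketch of how this scenario is typically provoked with SPDK's rpc.py, assuming a running nvmf target whose subsystem nqn.2016-06.io.spdk:cnode1 listens on both ports; flag spellings are per our reading of rpc.py and should be treated as assumptions:

  rpc=scripts/rpc.py
  # initiator: register the primary path and a failover path for the same subsystem
  $rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # target: drop the active listener; in-flight I/O on that qpair is aborted
  # (the SQ is deleted) and bdev_nvme fails over to 4422, as logged above
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421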
00:26:44.449 [2024-11-20 10:44:05.375240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:44.449 [2024-11-20 10:44:05.375246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST print/completion pair repeated for admin cid:2 (10:44:05.375252), cid:1 (10:44:05.375263) and cid:0 (10:44:05.375276), each ABORTED - SQ DELETION (00/08) qid:0 ...]
00:26:44.449 [2024-11-20 10:44:05.375287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:44.449 [2024-11-20 10:44:05.377730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:44.449 [2024-11-20 10:44:05.377750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1943d70 (9): Bad file descriptor
00:26:44.449 [2024-11-20 10:44:05.404905] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:44.449 11537.40 IOPS, 45.07 MiB/s [2024-11-20T09:44:16.825Z] 11779.17 IOPS, 46.01 MiB/s [2024-11-20T09:44:16.825Z] 11967.43 IOPS, 46.75 MiB/s [2024-11-20T09:44:16.825Z] 12090.50 IOPS, 47.23 MiB/s [2024-11-20T09:44:16.825Z]
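The bdevperf progress samples above are consistent with the I/O size visible in the aborted commands (len:8 blocks, i.e. the 0x1000-byte SGL). A quick check on the first sample, assuming 512-byte blocks and MiB = 2^20 bytes:

  \frac{45.07 \times 2^{20}\ \text{B/s}}{11537.40\ \text{IOPS}} \approx 4096\ \text{B} = 8 \times 512\ \text{B}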
[2024-11-20 10:44:09.754406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:44.449 [2024-11-20 10:44:09.754435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST print/completion pair repeated for admin cid:1 (10:44:09.754443), cid:2 (10:44:09.754455) and cid:3 (10:44:09.754465), each ABORTED - SQ DELETION (00/08) qid:0 ...]
00:26:44.449 [2024-11-20 10:44:09.754476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1943d70 is same with the state(6) to be set
00:26:44.449 [2024-11-20 10:44:09.754522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:44.449 [2024-11-20 10:44:09.754529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 110 further WRITE commands (sqid:1, cid varies, lba 20584-21456 in steps of 8, len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) printed between 10:44:09.754539 and 10:44:09.755797, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
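Every completion in this burst carries the same "(00/08)" status pair. Read as (SCT/SC) per the NVMe base specification, status code type 0x0 is Generic Command Status and status code 0x08 within it is Command Aborted due to SQ Deletion, which is exactly what spdk_nvme_print_completion spells out. A small decoder for the two codes that actually occur in this log (bash; the function name is ours):

  decode_nvme_status() {  # args: SCT SC, two hex digits each, as printed in "(00/08)"
    case "$1/$2" in
      00/00) echo 'SUCCESS - generic command status, successful completion' ;;
      00/08) echo 'ABORTED - SQ DELETION (submission queue was deleted)' ;;
      *)     echo "unrecognized SCT=$1 SC=$2" ;;
    esac
  }
  decode_nvme_status 00 08   # -> ABORTED - SQ DELETION (submission queue was deleted)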
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.452 [2024-11-20 10:44:09.755985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.452 [2024-11-20 10:44:09.755999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.453 [2024-11-20 10:44:09.756004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.453 [2024-11-20 10:44:09.756009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21464 len:8 PRP1 0x0 PRP2 0x0 00:26:44.453 [2024-11-20 10:44:09.756014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.453 [2024-11-20 10:44:09.756050] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:44.453 [2024-11-20 10:44:09.756058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:44.453 [2024-11-20 10:44:09.758456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:44.453 [2024-11-20 10:44:09.758477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1943d70 (9): Bad file descriptor 00:26:44.453 12154.44 IOPS, 47.48 MiB/s [2024-11-20T09:44:16.829Z] [2024-11-20 10:44:09.942122] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
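
What the flood above records: deleting the submission queue on the active path completes every in-flight command as ABORTED - SQ DELETION, after which bdev_nvme fails the controller over to the next registered path and resets it. A minimal sketch of forcing one such failover by hand, mirroring the bdev_nvme_detach_controller calls failover.sh issues elsewhere in this trace (socket path and names are the ones used throughout this run):

  # drop the path the initiator is currently using; queued I/O is aborted with
  # "SQ DELETION" and bdev_nvme retries it on the surviving path
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  # expected log lines: "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420",
  # then "Resetting controller successful." once the alternate path reconnects
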
00:26:44.453 12080.90 IOPS, 47.19 MiB/s [2024-11-20T09:44:16.829Z]
12167.55 IOPS, 47.53 MiB/s [2024-11-20T09:44:16.829Z]
12232.58 IOPS, 47.78 MiB/s [2024-11-20T09:44:16.829Z]
12292.69 IOPS, 48.02 MiB/s [2024-11-20T09:44:16.829Z]
12348.43 IOPS, 48.24 MiB/s [2024-11-20T09:44:16.829Z]
12395.87 IOPS, 48.42 MiB/s
00:26:44.453 Latency(us)
00:26:44.453 [2024-11-20T09:44:16.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.453 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:44.453 Verification LBA range: start 0x0 length 0x4000
00:26:44.453 NVMe0n1 : 15.01 12398.32 48.43 1213.54 0.00 9383.27 539.31 21845.33
00:26:44.453 [2024-11-20T09:44:16.829Z] ===================================================================================================================
00:26:44.453 [2024-11-20T09:44:16.829Z] Total : 12398.32 48.43 1213.54 0.00 9383.27 539.31 21845.33
00:26:44.453 Received shutdown signal, test time was about 15.000000 seconds
00:26:44.453
00:26:44.453 Latency(us)
00:26:44.453 [2024-11-20T09:44:16.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.453 [2024-11-20T09:44:16.829Z] ===================================================================================================================
00:26:44.453 [2024-11-20T09:44:16.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2169924
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2169924 /var/tmp/bdevperf.sock
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2169924 ']'
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
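
Condensed, the harness pattern visible around this point: bdevperf is started in RPC-server mode (-z) with no bdevs configured, the NVMe-oF paths are attached over its UNIX socket, and the workload is kicked off via bdevperf.py. A sketch using the exact commands from this trace ($SPDK is shorthand for the workspace checkout; the trailing '&' stands in for the harness's waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # attach the same subsystem over several portals; -x failover enables path failover
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # run the configured verify workload and print the JSON results seen below
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
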
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:44.453 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:44.714 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:44.714 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:44.714 10:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:44.714 [2024-11-20 10:44:17.034705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:44.714 10:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:44.975 [2024-11-20 10:44:17.215123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:44.975 10:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:45.235 NVMe0n1
00:26:45.235 10:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:45.805
00:26:45.805 10:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:46.065
00:26:46.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:46.065 10:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:46.326 10:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:46.326 10:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:49.626 10:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:49.626 10:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:49.626 10:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2171209
00:26:49.626 10:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:49.626 10:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2171209
00:26:51.011 {
00:26:51.011   "results": [
00:26:51.011     {
00:26:51.011       "job": "NVMe0n1",
00:26:51.011       "core_mask": "0x1",
00:26:51.011       "workload": "verify",
00:26:51.011       "status": "finished",
00:26:51.011       "verify_range": {
00:26:51.011         "start": 0,
00:26:51.011         "length": 16384
00:26:51.011       },
00:26:51.011       "queue_depth": 128,
00:26:51.011       "io_size": 4096,
00:26:51.011       "runtime": 1.00562,
00:26:51.011       "iops": 13038.722380223146,
00:26:51.011       "mibps": 50.93250929774666,
00:26:51.011       "io_failed": 0,
00:26:51.011       "io_timeout": 0,
00:26:51.011       "avg_latency_us": 9780.680707748626,
00:26:51.011       "min_latency_us": 1788.5866666666666,
00:26:51.011       "max_latency_us": 10649.6
00:26:51.011     }
00:26:51.011   ],
00:26:51.011   "core_count": 1
00:26:51.011 }
00:26:51.011 10:44:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-20 10:44:16.092984] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
[2024-11-20 10:44:16.093045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169924 ]
[2024-11-20 10:44:16.177969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 10:44:16.207963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 10:44:18.653859] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-11-20 10:44:18.653899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 10:44:18.653908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 10:44:18.653914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 10:44:18.653920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 10:44:18.653926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 10:44:18.653931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 10:44:18.653936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-20 10:44:18.653941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 10:44:18.653951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:51.011 [2024-11-20 10:44:18.653970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-20 10:44:18.653981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1924d70 (9): Bad file descriptor
[2024-11-20 10:44:18.756377] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:51.011 Running I/O for 1 seconds...
00:26:51.011 12984.00 IOPS, 50.72 MiB/s
00:26:51.011 Latency(us)
[2024-11-20T09:44:23.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:51.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:51.011 Verification LBA range: start 0x0 length 0x4000
00:26:51.011 NVMe0n1 : 1.01 13038.72 50.93 0.00 0.00 9780.68 1788.59 10649.60
00:26:51.011 [2024-11-20T09:44:23.387Z] ===================================================================================================================
00:26:51.011 [2024-11-20T09:44:23.387Z] Total : 13038.72 50.93 0.00 0.00 9780.68 1788.59 10649.60
00:26:51.011 10:44:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:51.011 10:44:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:51.011 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:51.272 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:51.272 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:51.272 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:51.533 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2169924
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2169924 ']'
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2169924
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169924
00:26:54.861 10:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:54.861 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.861 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169924' 00:26:54.861 killing process with pid 2169924 00:26:54.861 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2169924 00:26:54.861 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2169924 00:26:54.861 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:54.861 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.122 rmmod nvme_tcp 00:26:55.122 rmmod nvme_fabrics 00:26:55.122 rmmod nvme_keyring 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2166278 ']' 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2166278 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2166278 ']' 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2166278 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166278 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166278' 00:26:55.122 killing process with pid 2166278 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2166278 00:26:55.122 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2166278 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.384 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.296 00:26:57.296 real 0m40.393s 00:26:57.296 user 2m4.324s 00:26:57.296 sys 0m8.746s 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:57.296 ************************************ 00:26:57.296 END TEST nvmf_failover 00:26:57.296 ************************************ 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.296 10:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.557 ************************************ 00:26:57.557 START TEST nvmf_host_discovery 00:26:57.557 ************************************ 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:57.557 * Looking for test storage... 
00:26:57.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:57.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.557 --rc genhtml_branch_coverage=1 00:26:57.557 --rc genhtml_function_coverage=1 00:26:57.557 --rc genhtml_legend=1 00:26:57.557 --rc geninfo_all_blocks=1 00:26:57.557 --rc geninfo_unexecuted_blocks=1 00:26:57.557 00:26:57.557 ' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:57.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.557 --rc genhtml_branch_coverage=1 00:26:57.557 --rc genhtml_function_coverage=1 00:26:57.557 --rc genhtml_legend=1 00:26:57.557 --rc geninfo_all_blocks=1 00:26:57.557 --rc geninfo_unexecuted_blocks=1 00:26:57.557 00:26:57.557 ' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:57.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.557 --rc genhtml_branch_coverage=1 00:26:57.557 --rc genhtml_function_coverage=1 00:26:57.557 --rc genhtml_legend=1 00:26:57.557 --rc geninfo_all_blocks=1 00:26:57.557 --rc geninfo_unexecuted_blocks=1 00:26:57.557 00:26:57.557 ' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:57.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.557 --rc genhtml_branch_coverage=1 00:26:57.557 --rc genhtml_function_coverage=1 00:26:57.557 --rc genhtml_legend=1 00:26:57.557 --rc geninfo_all_blocks=1 00:26:57.557 --rc geninfo_unexecuted_blocks=1 00:26:57.557 00:26:57.557 ' 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:57.557 10:44:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:57.557 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(the same golangci/protoc/go triplet repeated, elided):/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:(same PATH as @2 with /opt/go/1.21.1/bin prepended, elided):/var/lib/snapd/snap/bin
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:(same PATH with /opt/protoc/21.7/bin prepended, elided):/var/lib/snapd/snap/bin
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:(the PATH just exported, elided):/var/lib/snapd/snap/bin
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:57.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.818 10:44:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:05.960 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:05.960 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.960 10:44:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:05.960 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.960 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:05.961 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.961 
10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:27:05.961 00:27:05.961 --- 10.0.0.2 ping statistics --- 00:27:05.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.961 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:27:05.961 00:27:05.961 --- 10.0.0.1 ping statistics --- 00:27:05.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.961 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2176324 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2176324 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2176324 ']' 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.961 10:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.961 [2024-11-20 10:44:37.505513] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
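
Before the target app comes up, the nvmf_tcp_init trace above wires the two E810 ports together through a network namespace so that initiator and target traffic really crosses the link. Condensed to its effective commands (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this rig reports; root privileges assumed):

  ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
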
00:27:05.961 [2024-11-20 10:44:37.505581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.961 [2024-11-20 10:44:37.603567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.961 [2024-11-20 10:44:37.654028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.961 [2024-11-20 10:44:37.654078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.961 [2024-11-20 10:44:37.654087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.961 [2024-11-20 10:44:37.654094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.961 [2024-11-20 10:44:37.654100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.961 [2024-11-20 10:44:37.654848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.961 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.961 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:05.961 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.961 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.961 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 [2024-11-20 10:44:38.368139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 [2024-11-20 10:44:38.380394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 null0 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 null1 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2176609 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2176609 /tmp/host.sock 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2176609 ']' 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:06.222 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.222 10:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.222 [2024-11-20 10:44:38.476270] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
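By this point the target side is provisioned and a second nvmf_tgt instance has just been started as the "host", bound to /tmp/host.sock (discovery.sh@44), so that rpc_cmd -s /tmp/host.sock drives it independently of the target. For orientation, the target-side RPCs issued so far, collected from the xtrace with arguments exactly as logged:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009   # discovery service the host will query
rpc_cmd bdev_null_create null0 1000 512   # 1000 MB, 512-byte blocks
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine             # let bdev examination settle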
00:27:06.222 [2024-11-20 10:44:38.476337] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176609 ] 00:27:06.222 [2024-11-20 10:44:38.568591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.483 [2024-11-20 10:44:38.622153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.054 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.054 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.055 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 [2024-11-20 10:44:39.651510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:07.316 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:07.577 10:44:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:07.577 10:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:08.148 [2024-11-20 10:44:40.367086] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:08.148 [2024-11-20 10:44:40.367107] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:08.148 [2024-11-20 10:44:40.367121] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:08.148 
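The waitforcondition calls above are a plain poll-with-timeout. Reconstructed from the autotest_common.sh xtrace (@918-@924; the final return value is inferred, since the failure branch never fires in this run), it evaluates the condition string up to ten times, one second apart:

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # The condition is a shell expression, e.g.
        # '[[ "$(get_subsystem_names)" == "nvme0" ]]' as at @105 above.
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

The one-second sleep at @924 is what gives the discovery path time to attach nvme0, as the bdev_nvme log lines in between show.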
[2024-11-20 10:44:40.454383] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:08.148 [2024-11-20 10:44:40.515123] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:08.148 [2024-11-20 10:44:40.516062] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xde3780:1 started. 00:27:08.148 [2024-11-20 10:44:40.517675] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:08.148 [2024-11-20 10:44:40.517693] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:08.407 [2024-11-20 10:44:40.525649] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xde3780 was disconnected and freed. delete nvme_qpair. 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.668 10:44:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:08.668 10:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.668 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:08.929 [2024-11-20 10:44:41.242187] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xde3b20:1 started. 00:27:08.929 [2024-11-20 10:44:41.247393] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xde3b20 was disconnected and freed. delete nvme_qpair. 
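The notification bookkeeping is a cursor into the host app's event stream: get_notification_count (discovery.sh@74-@75 in the xtrace) fetches every notification past notify_id and then advances the cursor, so each is_notification_count_eq check only sees new events. A sketch matching the values logged above (count 1 for the first namespace, cursor moving 0 -> 1 -> 2):

get_notification_count() {
    # Count notifications newer than the cursor held in notify_id...
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    # ...then advance the cursor past them.
    notify_id=$((notify_id + notification_count))
}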
00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:08.929 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.189 [2024-11-20 10:44:41.331821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:09.189 [2024-11-20 10:44:41.332759] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:09.189 [2024-11-20 10:44:41.332779] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:09.189 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:09.190 [2024-11-20 10:44:41.419500] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:09.190 10:44:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:09.450 [2024-11-20 10:44:41.728065] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:09.450 [2024-11-20 10:44:41.728102] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:09.451 [2024-11-20 10:44:41.728111] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:09.451 [2024-11-20 10:44:41.728116] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:10.393 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.394 [2024-11-20 10:44:42.611539] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:10.394 [2024-11-20 10:44:42.611560] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:10.394 [2024-11-20 10:44:42.616772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.394 [2024-11-20 10:44:42.616791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.394 [2024-11-20 10:44:42.616801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.394 [2024-11-20 10:44:42.616808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.394 [2024-11-20 10:44:42.616816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.394 [2024-11-20 10:44:42.616823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.394 [2024-11-20 10:44:42.616836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.394 [2024-11-20 10:44:42.616845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.394 [2024-11-20 10:44:42.616852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:10.394 [2024-11-20 10:44:42.626785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.394 [2024-11-20 10:44:42.636819] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.394 [2024-11-20 10:44:42.636833] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.394 [2024-11-20 10:44:42.636838] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:10.394 [2024-11-20 10:44:42.636844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.394 [2024-11-20 10:44:42.636862] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:10.394 [2024-11-20 10:44:42.637424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.394 [2024-11-20 10:44:42.637465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.394 [2024-11-20 10:44:42.637476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.394 [2024-11-20 10:44:42.637495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.394 [2024-11-20 10:44:42.637508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.394 [2024-11-20 10:44:42.637516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.394 [2024-11-20 10:44:42.637524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.394 [2024-11-20 10:44:42.637532] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:10.394 [2024-11-20 10:44:42.637538] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.394 [2024-11-20 10:44:42.637543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
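The burst of reconnect errors above is the intended effect of step @127: the test removes the 4420 listener on the target while the host still holds a qpair to 10.0.0.2:4420, so every reconnect attempt fails with connect() errno 111 (ECONNREFUSED) and bdev_nvme cycles through delete/disconnect/reconnect. On the target side the trigger is a single RPC; the 4421 listener added at @118 stays up, so the subsystem itself remains reachable, and presumably the test next waits for the path list to shrink to just 4421:

# Drop only the 4420 listener; nqn.2016-06.io.spdk:cnode0 keeps serving 4421.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420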
00:27:10.394 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.394 [2024-11-20 10:44:42.646896] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.394 [2024-11-20 10:44:42.646912] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.394 [2024-11-20 10:44:42.646919] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:10.394 [2024-11-20 10:44:42.646925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.394 [2024-11-20 10:44:42.646942] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:10.394 [2024-11-20 10:44:42.647377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.394 [2024-11-20 10:44:42.647417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.394 [2024-11-20 10:44:42.647428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.394 [2024-11-20 10:44:42.647447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.394 [2024-11-20 10:44:42.647459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.394 [2024-11-20 10:44:42.647466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.394 [2024-11-20 10:44:42.647474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.394 [2024-11-20 10:44:42.647482] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:10.394 [2024-11-20 10:44:42.647487] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.394 [2024-11-20 10:44:42.647492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:10.394 [2024-11-20 10:44:42.656976] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.394 [2024-11-20 10:44:42.656993] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.395 [2024-11-20 10:44:42.656998] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.657003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.395 [2024-11-20 10:44:42.657021] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:10.395 [2024-11-20 10:44:42.657378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.395 [2024-11-20 10:44:42.657394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.395 [2024-11-20 10:44:42.657402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.395 [2024-11-20 10:44:42.657414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.395 [2024-11-20 10:44:42.657425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.395 [2024-11-20 10:44:42.657431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.395 [2024-11-20 10:44:42.657439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.395 [2024-11-20 10:44:42.657445] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:10.395 [2024-11-20 10:44:42.657450] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.395 [2024-11-20 10:44:42.657459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:10.395 [2024-11-20 10:44:42.667052] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.395 [2024-11-20 10:44:42.667066] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.395 [2024-11-20 10:44:42.667071] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.667075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.395 [2024-11-20 10:44:42.667091] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.667301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.395 [2024-11-20 10:44:42.667314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.395 [2024-11-20 10:44:42.667322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.395 [2024-11-20 10:44:42.667333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.395 [2024-11-20 10:44:42.667344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.395 [2024-11-20 10:44:42.667351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.395 [2024-11-20 10:44:42.667359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.395 [2024-11-20 10:44:42.667365] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:10.395 [2024-11-20 10:44:42.667370] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.395 [2024-11-20 10:44:42.667375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.395 [2024-11-20 10:44:42.677123] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.395 [2024-11-20 10:44:42.677138] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.395 [2024-11-20 10:44:42.677143] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.677147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.395 [2024-11-20 10:44:42.677166] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.395 [2024-11-20 10:44:42.677385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.395 [2024-11-20 10:44:42.677399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.395 [2024-11-20 10:44:42.677406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.395 [2024-11-20 10:44:42.677418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.395 [2024-11-20 10:44:42.677429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.395 [2024-11-20 10:44:42.677435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.395 [2024-11-20 10:44:42.677443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:10.395 [2024-11-20 10:44:42.677449] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:10.395 [2024-11-20 10:44:42.677454] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.395 [2024-11-20 10:44:42.677458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:10.395 [2024-11-20 10:44:42.687198] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.395 [2024-11-20 10:44:42.687214] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.395 [2024-11-20 10:44:42.687219] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.687224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.395 [2024-11-20 10:44:42.687240] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.687604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.395 [2024-11-20 10:44:42.687617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.395 [2024-11-20 10:44:42.687624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.395 [2024-11-20 10:44:42.687636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.395 [2024-11-20 10:44:42.687647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.395 [2024-11-20 10:44:42.687654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.395 [2024-11-20 10:44:42.687662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.395 [2024-11-20 10:44:42.687668] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:10.395 [2024-11-20 10:44:42.687673] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.395 [2024-11-20 10:44:42.687677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:10.395 [2024-11-20 10:44:42.697271] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:10.395 [2024-11-20 10:44:42.697283] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:10.395 [2024-11-20 10:44:42.697292] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
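The waitforcondition trace interleaved above is the test polling a shell condition with a bounded retry counter. A minimal sketch of that helper, reconstructed from the xtrace lines (cond, max=10, (( max-- )), eval, return 0); the real one lives in common/autotest_common.sh, and the pacing between attempts is an assumption since the trace does not show a sleep:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # bounded retries, as shown in the trace
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 0.1   # assumed pacing; not visible in the xtrace output
        done
        return 1
    }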
00:27:10.395 [2024-11-20 10:44:42.697297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.395 [2024-11-20 10:44:42.697311] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:10.395 [2024-11-20 10:44:42.697504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.395 [2024-11-20 10:44:42.697518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3e10 with addr=10.0.0.2, port=4420 00:27:10.395 [2024-11-20 10:44:42.697525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3e10 is same with the state(6) to be set 00:27:10.395 [2024-11-20 10:44:42.697536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3e10 (9): Bad file descriptor 00:27:10.395 [2024-11-20 10:44:42.697548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.395 [2024-11-20 10:44:42.697555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.395 [2024-11-20 10:44:42.697563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.395 [2024-11-20 10:44:42.697569] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:10.395 [2024-11-20 10:44:42.697574] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:10.395 [2024-11-20 10:44:42.697578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
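The five nearly identical blocks above are one reconnect cycle repeating at roughly 10 ms intervals: delete qpairs, disconnect, start reconnecting, connect() to 10.0.0.2:4420 refused (errno 111, the listener on 4420 was removed), controller reset fails, repeat. While such a loop runs, the controller state can be queried over the same host RPC socket the test uses; a hedged example assembled from the rpc_cmd invocations visible in this trace (rpc.py path relative to the spdk checkout):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0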
00:27:10.395 [2024-11-20 10:44:42.698826] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:10.395 [2024-11-20 10:44:42.698844] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.395 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:10.396 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:10.657 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:10.658 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.658 10:44:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.040 [2024-11-20 10:44:44.046310] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:12.040 [2024-11-20 10:44:44.046331] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:12.040 [2024-11-20 10:44:44.046340] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:12.040 [2024-11-20 10:44:44.172693] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:12.040 [2024-11-20 10:44:44.238351] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:12.040 [2024-11-20 10:44:44.238968] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xdc4eb0:1 started. 00:27:12.040 [2024-11-20 10:44:44.240293] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:12.040 [2024-11-20 10:44:44.240314] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:12.040 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.040 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.040 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:12.040 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.040 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:12.040 [2024-11-20 10:44:44.244699] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xdc4eb0 was disconnected and freed. delete nvme_qpair. 
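The attach sequence just logged (discovery ctrlr attached, log page read, new subsystem at 4421, ctrlr created, attach done) is what bdev_nvme_start_discovery with -w waits for. Issued directly, the same call looks like this, with every flag copied from the rpc_cmd trace above (-w maps to "wait_for_attach" in the JSON-RPC request):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

Repeating the call while the nvme discovery service is still registered is expected to fail, which is exactly what the NOT-wrapped rpc_cmd below demonstrates with the -17 "File exists" response.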
00:27:12.040 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.041 request: 00:27:12.041 { 00:27:12.041 "name": "nvme", 00:27:12.041 "trtype": "tcp", 00:27:12.041 "traddr": "10.0.0.2", 00:27:12.041 "adrfam": "ipv4", 00:27:12.041 "trsvcid": "8009", 00:27:12.041 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:12.041 "wait_for_attach": true, 00:27:12.041 "method": "bdev_nvme_start_discovery", 00:27:12.041 "req_id": 1 00:27:12.041 } 00:27:12.041 Got JSON-RPC error response 00:27:12.041 response: 00:27:12.041 { 00:27:12.041 "code": -17, 00:27:12.041 "message": "File exists" 00:27:12.041 } 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.041 request: 00:27:12.041 { 00:27:12.041 "name": "nvme_second", 00:27:12.041 "trtype": "tcp", 00:27:12.041 "traddr": "10.0.0.2", 00:27:12.041 "adrfam": "ipv4", 00:27:12.041 "trsvcid": "8009", 00:27:12.041 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:12.041 "wait_for_attach": true, 00:27:12.041 "method": "bdev_nvme_start_discovery", 00:27:12.041 "req_id": 1 00:27:12.041 } 00:27:12.041 Got JSON-RPC error response 00:27:12.041 response: 00:27:12.041 { 00:27:12.041 "code": -17, 00:27:12.041 "message": "File exists" 00:27:12.041 } 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
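Both negative cases (nvme, then nvme_second, on port 8009) return -17 because a discovery service is already registered for that name or target address; the checks being traced here read the surviving state back to confirm nothing was disturbed. Assembled from the host/discovery.sh@67 trace lines, the discovery-name check is roughly:

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info \
        | jq -r '.[].name' | sort | xargs

(xargs collapses the names onto one line for the string comparison against "nvme").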
00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:12.041 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.301 10:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:13.244 [2024-11-20 10:44:45.489178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.244 [2024-11-20 10:44:45.489203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdcdc80 with addr=10.0.0.2, port=8010 00:27:13.244 [2024-11-20 10:44:45.489213] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:13.244 [2024-11-20 10:44:45.489219] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:13.244 [2024-11-20 10:44:45.489225] 
bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:14.188 [2024-11-20 10:44:46.491356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 10:44:46.491376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdcdc80 with addr=10.0.0.2, port=8010 00:27:14.188 [2024-11-20 10:44:46.491385] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:14.188 [2024-11-20 10:44:46.491390] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:14.188 [2024-11-20 10:44:46.491395] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:15.130 [2024-11-20 10:44:47.493525] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:15.130 request: 00:27:15.130 { 00:27:15.130 "name": "nvme_second", 00:27:15.130 "trtype": "tcp", 00:27:15.130 "traddr": "10.0.0.2", 00:27:15.130 "adrfam": "ipv4", 00:27:15.130 "trsvcid": "8010", 00:27:15.130 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:15.130 "wait_for_attach": false, 00:27:15.130 "attach_timeout_ms": 3000, 00:27:15.130 "method": "bdev_nvme_start_discovery", 00:27:15.130 "req_id": 1 00:27:15.130 } 00:27:15.130 Got JSON-RPC error response 00:27:15.130 response: 00:27:15.130 { 00:27:15.130 "code": -110, 00:27:15.130 "message": "Connection timed out" 00:27:15.130 } 00:27:15.130 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:15.130 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:15.130 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:15.130 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:15.130 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:15.130 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2176609 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:15.389 rmmod nvme_tcp 00:27:15.389 rmmod nvme_fabrics 00:27:15.389 rmmod nvme_keyring 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2176324 ']' 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2176324 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2176324 ']' 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2176324 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176324 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176324' 00:27:15.389 killing process with pid 2176324 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2176324 00:27:15.389 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2176324 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.649 10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.649 
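nvmftestfini then tears the host side down: sync, unload the nvme_tcp, nvme_fabrics and nvme_keyring modules, and kill the target process (pid 2176324) only after verifying it is an SPDK reactor rather than sudo. A condensed sketch of that killprocess guard, reconstructed from the trace (the real helper in autotest_common.sh also covers non-Linux hosts and handles wait semantics differently):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still alive?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # reactor_1 in this run
            [[ $name != sudo ]] || return 1          # never kill sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }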
10:44:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.572 00:27:17.572 real 0m20.172s 00:27:17.572 user 0m23.302s 00:27:17.572 sys 0m7.194s 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:17.572 ************************************ 00:27:17.572 END TEST nvmf_host_discovery 00:27:17.572 ************************************ 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.572 10:44:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.834 ************************************ 00:27:17.834 START TEST nvmf_host_multipath_status 00:27:17.834 ************************************ 00:27:17.834 10:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:17.834 * Looking for test storage... 00:27:17.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.834 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:17.834 10:44:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.835 --rc genhtml_branch_coverage=1 00:27:17.835 --rc genhtml_function_coverage=1 00:27:17.835 --rc genhtml_legend=1 00:27:17.835 --rc geninfo_all_blocks=1 00:27:17.835 --rc geninfo_unexecuted_blocks=1 00:27:17.835 00:27:17.835 ' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.835 --rc genhtml_branch_coverage=1 00:27:17.835 --rc genhtml_function_coverage=1 00:27:17.835 --rc genhtml_legend=1 00:27:17.835 --rc geninfo_all_blocks=1 00:27:17.835 --rc geninfo_unexecuted_blocks=1 00:27:17.835 00:27:17.835 ' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:17.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.835 --rc genhtml_branch_coverage=1 00:27:17.835 --rc genhtml_function_coverage=1 00:27:17.835 --rc genhtml_legend=1 00:27:17.835 --rc geninfo_all_blocks=1 00:27:17.835 --rc geninfo_unexecuted_blocks=1 00:27:17.835 00:27:17.835 ' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.835 --rc genhtml_branch_coverage=1 00:27:17.835 --rc genhtml_function_coverage=1 00:27:17.835 --rc 
genhtml_legend=1 00:27:17.835 --rc geninfo_all_blocks=1 00:27:17.835 --rc geninfo_unexecuted_blocks=1 00:27:17.835 00:27:17.835 ' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:27:17.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.835 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.836 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:18.096 10:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.242 10:44:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.242 
10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:26.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:26.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:26.242 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
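The pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above is the whole PCI-to-netdev mapping: a network function exposes its kernel interface name under sysfs. Stripped down:

pci=0000:4b:00.0    # first E810 port found by the scan above
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${path##*/}"   # prints cvl_0_0 here
done

The ${...##*/} strip mirrors the pci_net_devs=("${pci_net_devs[@]##*/}") step in the trace.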
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:26.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.242 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:27:26.243 00:27:26.243 --- 10.0.0.2 ping statistics --- 00:27:26.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.243 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:27:26.243 00:27:26.243 --- 10.0.0.1 ping statistics --- 00:27:26.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.243 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2182786 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2182786 
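Condensed replay of the nvmf_tcp_init sequence above (commands taken from the log; the iptables -m comment tag is dropped): the target-side port cvl_0_0 moves into its own namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1, and the two pings prove the ports (presumably cabled back-to-back on this phy rig) reach each other across the namespace boundary:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator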
00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2182786 ']' 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.243 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.243 [2024-11-20 10:44:57.750101] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:27:26.243 [2024-11-20 10:44:57.750174] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.243 [2024-11-20 10:44:57.848507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:26.243 [2024-11-20 10:44:57.899596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.243 [2024-11-20 10:44:57.899647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.243 [2024-11-20 10:44:57.899656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.243 [2024-11-20 10:44:57.899664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.243 [2024-11-20 10:44:57.899670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
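The target itself runs inside that namespace (paths shortened here), so every listener it opens binds on the cvl_0_0 side; -m 0x3 hands it the two cores whose reactors start just below. A condensed sketch, assuming waitforlisten's usual pid-plus-socket polling from autotest_common.sh:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs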
00:27:26.243 [2024-11-20 10:44:57.901419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.243 [2024-11-20 10:44:57.901520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.243 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.243 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:26.243 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.243 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.243 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:26.504 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.504 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2182786 00:27:26.504 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:26.504 [2024-11-20 10:44:58.792735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.504 10:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:26.764 Malloc0 00:27:26.765 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:27.025 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.285 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.285 [2024-11-20 10:44:59.623401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.285 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:27.546 [2024-11-20 10:44:59.823976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2183150 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2183150 
/var/tmp/bdevperf.sock 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2183150 ']' 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.546 10:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:28.488 10:45:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.488 10:45:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:28.488 10:45:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:28.749 10:45:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:29.009 Nvme0n1 00:27:29.010 10:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:29.270 Nvme0n1 00:27:29.531 10:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:29.531 10:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:31.443 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:31.443 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:31.703 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:31.703 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:33.088 10:45:05 
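Everything provisioned above, condensed (rpc.py stands for spdk/scripts/rpc.py; the first block talks to the target's default /var/tmp/spdk.sock, the second to bdevperf's socket). The two attach calls share the bdev name and NQN but differ in port, which is what makes bdevperf see one Nvme0n1 namespace with two I/O paths:

# target side: one Malloc-backed subsystem, listeners on 4420 and 4421
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# host side: two ANA-aware paths to the same controller
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
for port in 4420 4421; do
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -x multipath -l -1 -o 10
done

bdevperf.py's perform_tests call then kicks off the 90-second verify workload (-q 128 -o 4096 -w verify) that keeps I/O flowing while the ANA states are flipped below.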
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.088 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.348 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.348 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.348 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.348 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:33.609 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.609 10:45:05 
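All of the rpc.py/jq pairs in this check are one helper invoked six times per check_status; a paraphrase of its likely shape, reconstructed from the calls in this log (treat the body as approximate, not the test's verbatim source):

port_status() {   # port_status <trsvcid> <field> <expected>
    local port=$1 field=$2 expected=$3 status
    status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $status == "$expected" ]]
}
port_status 4420 current true   # e.g.: the optimized path is the one carrying I/O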
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:33.870 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.870 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:33.870 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:34.131 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:34.131 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.515 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:35.775 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.775 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:35.776 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
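Each "set_ANA_state A B" step is just two target-side RPCs, one per listener, with a one-second settle before the host's view is re-read; shape reconstructed from the log:

set_ANA_state() {   # set_ANA_state <state for 4420> <state for 4421>
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
set_ANA_state non_optimized optimized
sleep 1   # let the host notice the ANA change before check_status runs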
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.776 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:36.054 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.054 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:36.054 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.054 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:36.054 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.054 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:36.351 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.352 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:36.352 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.352 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:36.352 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:36.648 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:36.648 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:37.605 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:37.605 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:37.605 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.605 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:37.865 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.865 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:37.865 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.865 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:38.126 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:38.126 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:38.126 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.126 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:38.386 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.387 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.646 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.646 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:38.646 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.646 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:38.907 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.907 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:38.907 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:27:38.907 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:39.168 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:40.106 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:40.106 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:40.106 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.106 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.367 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.367 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:40.367 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.367 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:40.627 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:40.627 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:40.627 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.627 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:40.888 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:41.149 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.149 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:41.149 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.149 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:41.412 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:41.412 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:41.412 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:41.412 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:41.673 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:42.614 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:42.614 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:42.614 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.614 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:42.876 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:42.876 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:42.876 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.876 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.136 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.395 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.395 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:43.395 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.396 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:43.655 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.656 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:43.656 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.656 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:43.916 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.916 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:43.916 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:43.916 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:44.177 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:45.116 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:45.116 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:45.116 10:45:17 
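The inaccessible/inaccessible pass that just finished is the clearest illustration of the three jq fields: connected stayed true on both ports (the TCP connections and controllers survive ANA transitions) while current and accessible both dropped to false (an ANA-inaccessible path may not carry I/O). In the test's own terms:

set_ANA_state inaccessible inaccessible
sleep 1
check_status false false true true false false   # current x2, connected x2, accessible x2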
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.116 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:45.376 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:45.376 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:45.376 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.376 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.636 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:45.896 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.896 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:45.896 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.896 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:46.157 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:46.157 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:46.157 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.157 
10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:46.418 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.418 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:46.418 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:46.418 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:46.680 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:46.940 10:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:47.881 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:47.881 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:47.881 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.881 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:47.881 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.142 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.402 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.403 10:45:20 
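That completes the active_passive sweep; the expectations it asserted, gathered in check_status argument order (current 4420/4421, connected 4420/4421, accessible 4420/4421):

# ANA 4420/4421                  current       connected    accessible
# optimized / optimized          true  false   true  true   true  true
# non_optimized / optimized      false true    true  true   true  true
# non_optimized / non_optimized  true  false   true  true   true  true
# non_optimized / inaccessible   true  false   true  true   true  false
# inaccessible / inaccessible    false false   true  true   false false
# inaccessible / optimized       false true    true  true   false true

Under active_passive exactly one eligible path is ever current; the bdev_nvme_set_multipath_policy call just issued switches Nvme0n1 to active_active, and the optimized/optimized re-check now underway expects current=true on both ports:

$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active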
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.403 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.403 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.663 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.663 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:48.663 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.663 10:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:48.663 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.663 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:48.663 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.663 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.923 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.923 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:48.923 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:49.184 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:49.445 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:50.387 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:50.387 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:50.387 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.387 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.647 10:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.908 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.908 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.908 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.908 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.169 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:51.429 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.429 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:51.429 
10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:51.689 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:51.689 10:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.073 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.334 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.334 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.334 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.334 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.595 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.595 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:53.595 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.595 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.856 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.856 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:53.856 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.856 10:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:53.856 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.856 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:53.856 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:54.117 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:54.377 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:55.319 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:55.319 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:55.319 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.319 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.580 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.841 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.841 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:55.841 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.841 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.101 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2183150 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2183150 ']' 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2183150 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183150 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183150' 00:27:56.361 killing process with pid 2183150 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2183150 00:27:56.361 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2183150 00:27:56.642 { 00:27:56.642 "results": [ 00:27:56.642 { 00:27:56.643 "job": "Nvme0n1", 00:27:56.643 "core_mask": "0x4", 00:27:56.643 "workload": "verify", 00:27:56.643 "status": "terminated", 00:27:56.643 "verify_range": { 00:27:56.643 "start": 0, 00:27:56.643 "length": 16384 00:27:56.643 }, 00:27:56.643 "queue_depth": 128, 00:27:56.643 "io_size": 4096, 00:27:56.643 "runtime": 26.938634, 00:27:56.643 "iops": 11978.11292139015, 00:27:56.643 "mibps": 46.78950359918027, 00:27:56.643 "io_failed": 0, 00:27:56.643 "io_timeout": 0, 00:27:56.643 "avg_latency_us": 10667.932489137644, 00:27:56.643 "min_latency_us": 447.14666666666665, 00:27:56.643 "max_latency_us": 3075822.933333333 00:27:56.643 } 00:27:56.643 ], 00:27:56.643 "core_count": 1 00:27:56.643 } 00:27:56.643 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2183150 00:27:56.643 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:56.643 [2024-11-20 10:44:59.903482] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:27:56.643 [2024-11-20 10:44:59.903560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183150 ] 00:27:56.643 [2024-11-20 10:44:59.998078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.643 [2024-11-20 10:45:00.054408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.643 Running I/O for 90 seconds... 
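The JSON block above is bdevperf's per-job summary, printed as the initiator is torn down: the verify workload on Nvme0n1 sustained about 11,978 IOPS (46.8 MiB/s) over the 26.9 s runtime despite the ANA transitions. When such a summary is captured to a file, the fields of interest are easy to pull out with jq; an illustrative sketch, where bdevperf.json is an assumed filename rather than anything the test writes itself:

# Hypothetical post-processing of a captured summary; 'bdevperf.json' is an
# assumed capture of the JSON results block above.
jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us over \(.runtime)s"' bdevperf.json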
00:27:56.643 10529.00 IOPS, 41.13 MiB/s [2024-11-20T09:45:29.019Z] 11026.00 IOPS, 43.07 MiB/s [2024-11-20T09:45:29.019Z] 11118.33 IOPS, 43.43 MiB/s [2024-11-20T09:45:29.019Z] 11405.50 IOPS, 44.55 MiB/s [2024-11-20T09:45:29.019Z] 11698.60 IOPS, 45.70 MiB/s [2024-11-20T09:45:29.019Z] 11917.33 IOPS, 46.55 MiB/s [2024-11-20T09:45:29.019Z] 12053.14 IOPS, 47.08 MiB/s [2024-11-20T09:45:29.019Z] 12167.00 IOPS, 47.53 MiB/s [2024-11-20T09:45:29.019Z] 12229.67 IOPS, 47.77 MiB/s [2024-11-20T09:45:29.019Z] 12293.30 IOPS, 48.02 MiB/s [2024-11-20T09:45:29.019Z] 12345.09 IOPS, 48.22 MiB/s [2024-11-20T09:45:29.019Z] [2024-11-20 10:45:13.720005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.643 [2024-11-20 10:45:13.720041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.643 [2024-11-20 10:45:13.720065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:56.643 [2024-11-20 10:45:13.720175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.643 [2024-11-20 10:45:13.720180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:56.643
[... the same NOTICE pair repeats for every outstanding I/O in this window: the READs at LBAs 7096-7120 and the 8-block WRITEs at LBAs 7128-8112 are each echoed by an ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, and the dump then restarts at lba 7096 for a second pass over the same range ...]
00:27:56.647 [2024-11-20 10:45:13.723333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7256 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.647 [2024-11-20 10:45:13.723767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.647 [2024-11-20 10:45:13.723785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.723857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.723862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.734686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.734709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.734721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.734739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.734750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.734756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:56.647 [2024-11-20 10:45:13.734767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.647 [2024-11-20 10:45:13.734773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.734784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.734790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.734802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.734808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:56.648 
[2024-11-20 10:45:13.735121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 
sqhd:0047 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:56.648 [2024-11-20 10:45:13.735706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.648 [2024-11-20 10:45:13.735711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 
[2024-11-20 10:45:13.735796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7872 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.735991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.735997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.649 [2024-11-20 10:45:13.736353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:56.649 [2024-11-20 10:45:13.736364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:56.650 
[2024-11-20 10:45:13.736463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.650 [2024-11-20 10:45:13.736491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.650 [2024-11-20 10:45:13.736508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:56.650 [2024-11-20 10:45:13.736790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.650 [2024-11-20 10:45:13.736796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:56.650 [2024-11-20 10:45:13.737592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.650 [2024-11-20 10:45:13.737605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:56.651 [2024-11-20 10:45:13.737839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.651 [2024-11-20 10:45:13.737844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat for qid:1 (WRITE and READ, cid 0-126, lba 7096-8112, sqhd cycling 0020-007f and wrapping to 0065) from 10:45:13.737 through 10:45:13.751, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:56.655 [2024-11-20 10:45:13.751360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.655 [2024-11-20 10:45:13.751370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:56.655 [2024-11-20 10:45:13.751384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:55 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.655 [2024-11-20 10:45:13.751393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:56.655 [2024-11-20 10:45:13.751408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.655 [2024-11-20 10:45:13.751416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:56.655 [2024-11-20 10:45:13.751431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.655 [2024-11-20 10:45:13.751439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:56.655 [2024-11-20 10:45:13.751455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.655 [2024-11-20 10:45:13.751463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:56.655 [2024-11-20 10:45:13.751479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:56.656 
[2024-11-20 10:45:13.751862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.751984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.751999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.656 [2024-11-20 10:45:13.752264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:56.656 [2024-11-20 10:45:13.752279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.657 [2024-11-20 10:45:13.752332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.657 [2024-11-20 10:45:13.752358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.752664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.752673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 
[2024-11-20 10:45:13.753493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7344 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.657 [2024-11-20 10:45:13.753819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.657 [2024-11-20 10:45:13.753842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:56.657 [2024-11-20 10:45:13.753857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.657 [2024-11-20 10:45:13.753865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.753880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.753888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.753903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.753912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.753926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.753936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.753951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.753959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.753983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.753998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:56.658 
[2024-11-20 10:45:13.754473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 
cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:56.658 [2024-11-20 10:45:13.754869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.658 [2024-11-20 10:45:13.754878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.754897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.754906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.754924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.754933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.755986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.755998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 
[2024-11-20 10:45:13.756292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.659 [2024-11-20 10:45:13.756531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:56.659 [2024-11-20 10:45:13.756548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7960 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:27:56.659 [2024-11-20 10:45:13.756558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:56.659 [2024-11-20 10:45:13.756575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.659 [2024-11-20 10:45:13.756584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:56.660 [2024-11-20 10:45:13.757079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:56.660 [2024-11-20 10:45:13.757087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:56.659 - 00:27:56.664 [several hundred similar *NOTICE* entries elided: every outstanding WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on qid:1, nsid:1, lba 7096-8112, len:8 was completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0]
00:27:56.664 [2024-11-20 10:45:13.765306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.664 [2024-11-20 10:45:13.765312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:56.664
[2024-11-20 10:45:13.765324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:56.664 [2024-11-20 10:45:13.765347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:56.664 [2024-11-20 10:45:13.765367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:56.664 [2024-11-20 10:45:13.765385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:56.664 [2024-11-20 10:45:13.765404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:56.664 [2024-11-20 10:45:13.765423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:56.664 [2024-11-20 10:45:13.765441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.664 [2024-11-20 10:45:13.765447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 
cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.765968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.765974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:56.665 12361.92 IOPS, 48.29 MiB/s [2024-11-20T09:45:29.041Z] [2024-11-20 10:45:13.766527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.766538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.766552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.766561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.766576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.766582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.766595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.766601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.766614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7728 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.665 [2024-11-20 10:45:13.766620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:56.665 [2024-11-20 10:45:13.766633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.766983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.766996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:56.666 
[2024-11-20 10:45:13.767193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.666 [2024-11-20 10:45:13.767367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:27:56.666 [2024-11-20 10:45:13.767379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.667 [2024-11-20 10:45:13.767552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.667 [2024-11-20 10:45:13.767571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.767754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.767760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 
[2024-11-20 10:45:13.768499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.667 [2024-11-20 10:45:13.768654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:56.667 [2024-11-20 10:45:13.768666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7352 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.668 [2024-11-20 10:45:13.768747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.668 [2024-11-20 10:45:13.768767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:42 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.768986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.768993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:56.668 [2024-11-20 10:45:13.769242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:56.668 
[2024-11-20 10:45:13.769261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.668 [2024-11-20 10:45:13.769268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:56.668
[... 77 similar command/completion NOTICE pairs elided: WRITE lba:7584-8112 and lba:7128-7184 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000) plus READ lba:7096 and lba:7104 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), cids interleaved, every command on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02); sqhd advances 004b-0017, wrapping 007f->0000 at lba:8008 ...]
[2024-11-20 10:45:13.771293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.670 [2024-11-20 10:45:13.771300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:56.670
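The "(03/02)" printed in each completion above is the NVMe status pair Status Code Type 0x3 (Path Related) / Status Code 0x02 (Asymmetric Access Inaccessible): the target is reporting that the ANA group backing nsid:1 is in the Inaccessible state, so every queued WRITE and READ on qid:1 fails with a path error instead of executing, consistent with a test that deliberately toggles ANA states. A minimal classification sketch follows; the struct layout and names are illustrative (modeled on the NVMe completion Status field), not SPDK's own types.

    /* Illustrative sketch, not SPDK code: classify the "(03/02)" status
     * seen in the completions above. In the NVMe completion Status field,
     * SCT occupies bits 27:25 and SC bits 24:17 of CQE dword 3. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SCT_PATH_RELATED    0x3   /* Status Code Type: Path Related */
    #define SC_ANA_INACCESSIBLE 0x02  /* Status Code: ANA Inaccessible */

    struct nvme_status_fields {
        uint8_t sct; /* Status Code Type */
        uint8_t sc;  /* Status Code */
    };

    static bool is_ana_inaccessible(struct nvme_status_fields s)
    {
        /* True for records printed as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" */
        return s.sct == SCT_PATH_RELATED && s.sc == SC_ANA_INACCESSIBLE;
    }

    int main(void)
    {
        struct nvme_status_fields s = { .sct = 0x3, .sc = 0x02 };
        return is_ana_inaccessible(s) ? 0 : 1;
    }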
[2024-11-20 10:45:13.771868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.670 [2024-11-20 10:45:13.771878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:56.671
[... 116 similar command/completion NOTICE pairs elided: WRITE lba:7208-8112 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000) plus READ lba:7112 and lba:7120 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; sqhd advances 001a-000d, wrapping 007f->0000 at lba:8008 ...]
[2024-11-20 10:45:13.778319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.674 [2024-11-20 10:45:13.778327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.674
[2024-11-20 10:45:13.778345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7200 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 
nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.778984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.778998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.779004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.779017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.779024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.779037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.779043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.779056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.779062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.779075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.674 [2024-11-20 10:45:13.779081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:56.674 [2024-11-20 10:45:13.779095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.674 [2024-11-20 10:45:13.779101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.675 [2024-11-20 10:45:13.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:56.675 
[2024-11-20 10:45:13.779263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 
cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.675 [2024-11-20 10:45:13.779966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:56.675 [2024-11-20 10:45:13.779982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:13.779988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:56.676 11411.00 IOPS, 44.57 MiB/s [2024-11-20T09:45:29.052Z] 10595.93 IOPS, 41.39 MiB/s [2024-11-20T09:45:29.052Z] 9889.53 IOPS, 38.63 MiB/s [2024-11-20T09:45:29.052Z] 10079.75 IOPS, 39.37 MiB/s [2024-11-20T09:45:29.052Z] 10257.94 IOPS, 40.07 MiB/s [2024-11-20T09:45:29.052Z] 10593.44 IOPS, 41.38 MiB/s [2024-11-20T09:45:29.052Z] 10932.11 IOPS, 42.70 MiB/s [2024-11-20T09:45:29.052Z] 11152.90 IOPS, 43.57 MiB/s [2024-11-20T09:45:29.052Z] 11234.76 IOPS, 43.89 MiB/s [2024-11-20T09:45:29.052Z] 11319.64 IOPS, 44.22 MiB/s [2024-11-20T09:45:29.052Z] 11516.17 IOPS, 44.99 MiB/s [2024-11-20T09:45:29.052Z] 11736.12 IOPS, 45.84 MiB/s [2024-11-20T09:45:29.052Z] [2024-11-20 10:45:26.509355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:26.509475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:26.509554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:26.509640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:56.676 [2024-11-20 10:45:26.509771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:26.509855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:26.509870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.509931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.509936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.510934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.676 [2024-11-20 10:45:26.510948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:56.676 [2024-11-20 10:45:26.511224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.676 [2024-11-20 10:45:26.511237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:56.676 11900.00 IOPS, 46.48 MiB/s [2024-11-20T09:45:29.052Z] 11929.69 IOPS, 46.60 MiB/s [2024-11-20T09:45:29.052Z] Received shutdown signal, test time was about 26.939327 seconds 00:27:56.676 00:27:56.676 Latency(us) 00:27:56.676 [2024-11-20T09:45:29.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.676 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:56.676 Verification LBA range: start 0x0 length 0x4000 00:27:56.676 Nvme0n1 : 26.94 11978.11 46.79 0.00 0.00 10667.93 447.15 3075822.93 00:27:56.676 [2024-11-20T09:45:29.052Z] =================================================================================================================== 00:27:56.676 [2024-11-20T09:45:29.052Z] Total : 11978.11 46.79 0.00 0.00 10667.93 447.15 3075822.93 00:27:56.676 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.951 rmmod nvme_tcp 00:27:56.951 rmmod nvme_fabrics 00:27:56.951 rmmod nvme_keyring 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:56.951 10:45:29 
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2182786 ']'
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2182786
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2182786 ']'
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2182786
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182786
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182786'
00:27:56.951 killing process with pid 2182786
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2182786
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2182786
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:56.951 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:59.497
00:27:59.497 real 0m41.395s
00:27:59.497 user 1m47.065s
00:27:59.497 sys 0m11.780s
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:59.497 ************************************
00:27:59.497 END TEST nvmf_host_multipath_status
00:27:59.497 ************************************
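[editor's note] The @954-@978 trace above shows the shape of the killprocess helper: validate the pid, check the process is still alive, refuse to signal a sudo wrapper, then kill and reap. A sketch reconstructed from the traced steps, not the verbatim function from autotest_common.sh:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                 # @954: a pid is required
        kill -0 "$pid" 2>/dev/null || return 0    # @958: nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then           # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        [ "$process_name" = sudo ] && return 1    # @964: never signal the sudo wrapper itself
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid"                               # @978: reap and propagate the exit status
    }
[/editor's note]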
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:59.497 ************************************
00:27:59.497 START TEST nvmf_discovery_remove_ifc
00:27:59.497 ************************************
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:59.497 * Looking for test storage...
00:27:59.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
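[editor's note] run_test (common/autotest_common.sh) is the wrapper that produces the banners and the real/user/sys block in this log: it prints the START banner, runs the suite under time, and prints the END banner when the suite succeeds. A condensed sketch of that flow, not the verbatim wrapper:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                  # emits the real/user/sys lines seen above on completion
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    # usage, as traced above:
    # run_test nvmf_discovery_remove_ifc .../test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
[/editor's note]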
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.497 --rc genhtml_branch_coverage=1
00:27:59.497 --rc genhtml_function_coverage=1
00:27:59.497 --rc genhtml_legend=1
00:27:59.497 --rc geninfo_all_blocks=1
00:27:59.497 --rc geninfo_unexecuted_blocks=1
00:27:59.497
00:27:59.497 '
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.497 --rc genhtml_branch_coverage=1
00:27:59.497 --rc genhtml_function_coverage=1
00:27:59.497 --rc genhtml_legend=1
00:27:59.497 --rc geninfo_all_blocks=1
00:27:59.497 --rc geninfo_unexecuted_blocks=1
00:27:59.497
00:27:59.497 '
00:27:59.497 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:59.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.498 --rc genhtml_branch_coverage=1
00:27:59.498 --rc genhtml_function_coverage=1
00:27:59.498 --rc genhtml_legend=1
00:27:59.498 --rc geninfo_all_blocks=1
00:27:59.498 --rc geninfo_unexecuted_blocks=1
00:27:59.498
00:27:59.498 '
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:59.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.498 --rc genhtml_branch_coverage=1
00:27:59.498 --rc genhtml_function_coverage=1
00:27:59.498 --rc genhtml_legend=1
00:27:59.498 --rc geninfo_all_blocks=1
00:27:59.498 --rc geninfo_unexecuted_blocks=1
00:27:59.498
00:27:59.498 '
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
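[editor's note] The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) is a field-wise version compare: both strings are split on '.', '-' and ':', each field is validated as a decimal, and the first differing field decides. Here 1 < 2 at field 0, so lt returns 0 and the pre-2.0 lcov flags are exported. A condensed sketch of the traced logic, not the verbatim function:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=$((10#${ver1[v]:-0})) b=$((10#${ver2[v]:-0}))   # missing fields compare as 0
            ((a > b)) && { [[ $op == '>' ]]; return; }        # first differing field decides
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]                                      # all fields equal: only <=, >=, == succeed
    }
[/editor's note]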
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[log trimmed: repeated golangci/protoc/go entries]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
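[editor's note] nvmf/common.sh@17-@19 above mints a fresh host identity for the run: nvme gen-hostnqn returns an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, the bare UUID becomes NVME_HOSTID, and both are packed into the NVME_HOST argument array. A sketch of how that pair is typically handed to nvme connect later in these suites; the target address, port and subsystem NQN below are placeholders, not values from this log:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # one way to strip the prefix and keep the bare uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # placeholder target -- illustration only
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
[/editor's note]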
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:59.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
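The "[: : integer expression expected" message captured above is a real shell error, harmless here: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', a numeric test fed an empty variable. The usual guard is to default the value before comparing; a minimal sketch, with a hypothetical flag name standing in for whichever variable was empty:

# ':-0' supplies a default so '[' always sees an integer operand.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--some-extra-arg)   # hypothetical argument, for illustration only
fi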
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:59.498 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:07.660 10:45:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:07.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.660 10:45:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:07.660 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.660 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:07.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:07.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
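The device scan above matches the host's two E810 ports (vendor 0x8086, device 0x159b) out of a prebuilt pci_bus_cache, then resolves each PCI function to its kernel net interface through sysfs. A standalone sketch of the same lookup; the loop shape is illustrative (the suite consults its cache instead of walking sysfs directly), and the IDs are the ones the trace found:

vendor=0x8086 device=0x159b   # Intel E810, per the 'Found 0000:4b:00.x' lines
for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = "$vendor" ] || continue
    [ "$(cat "$pci/device")" = "$device" ] || continue
    for net in "$pci"/net/*; do                  # netdevs registered for this function
        [ -e "$net" ] && echo "${pci##*/}: ${net##*/}"
    done
done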
net_devs+=("${pci_net_devs[@]}") 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.661 10:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.661 
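Condensed, the nvmf_tcp_init sequence above builds the test topology: target port cvl_0_0 moves into a private namespace and takes 10.0.0.2, initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, and an iptables rule opens the NVMe/TCP listener port. The same steps without the wrapper functions (all names and addresses as in the trace; run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT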
10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:28:07.661 00:28:07.661 --- 10.0.0.2 ping statistics --- 00:28:07.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.661 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:28:07.661 00:28:07.661 --- 10.0.0.1 ping statistics --- 00:28:07.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.661 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2193607 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2193607 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2193607 ']' 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
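With the two ping checks proving reachability in both directions, nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. A sketch of that launch-and-wait pattern; the polling loop below is illustrative, the suite's waitforlisten does more bookkeeping:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app responds.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done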
00:28:07.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.661 10:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.661 [2024-11-20 10:45:39.231940] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:28:07.661 [2024-11-20 10:45:39.232006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.661 [2024-11-20 10:45:39.330459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.661 [2024-11-20 10:45:39.381125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.661 [2024-11-20 10:45:39.381185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.661 [2024-11-20 10:45:39.381194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.661 [2024-11-20 10:45:39.381201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.661 [2024-11-20 10:45:39.381207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.661 [2024-11-20 10:45:39.381968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.922 [2024-11-20 10:45:40.103727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.922 [2024-11-20 10:45:40.111991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:07.922 null0 00:28:07.922 [2024-11-20 10:45:40.143944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2193953 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2193953 /tmp/host.sock 00:28:07.922 10:45:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2193953 ']' 00:28:07.922 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:07.923 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.923 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:07.923 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:07.923 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.923 10:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.923 [2024-11-20 10:45:40.220314] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:28:07.923 [2024-11-20 10:45:40.220379] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193953 ] 00:28:08.184 [2024-11-20 10:45:40.311811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.184 [2024-11-20 10:45:40.365107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.756 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:08.756 10:45:41 
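rpc_cmd funnels its arguments to scripts/rpc.py on the host app's socket, so the setup above expands to roughly these three calls:

./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss and reconnect timeouts are what keep the interface-removal phase quick: the host gives up on a dead controller after 2 seconds instead of retrying indefinitely.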
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.018 10:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.959 [2024-11-20 10:45:42.142522] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:09.959 [2024-11-20 10:45:42.142553] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:09.959 [2024-11-20 10:45:42.142568] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:09.959 [2024-11-20 10:45:42.230844] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:10.219 [2024-11-20 10:45:42.456498] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:10.219 [2024-11-20 10:45:42.457579] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22d83f0:1 started. 00:28:10.219 [2024-11-20 10:45:42.459114] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:10.219 [2024-11-20 10:45:42.459166] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:10.219 [2024-11-20 10:45:42.459189] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:10.219 [2024-11-20 10:45:42.459203] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:10.219 [2024-11-20 10:45:42.459224] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:10.219 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.219 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:10.219 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:10.219 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.220 [2024-11-20 10:45:42.501903] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22d83f0 was disconnected and freed. delete nvme_qpair. 
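The get_bdev_list / wait_for_bdev pair driving this check, and the sleep-1 loops that follow, reduce to a small polling helper. Its shape, reconstructed from the trace (the expected value is '' while the link is down, nvme0n1 or nvme1n1 once an attach completes):

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once a second until the bdev list is exactly the expected string.
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}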
00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:10.220 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:10.481 10:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:11.421 10:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.802 10:45:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:12.802 10:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:13.742 10:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:14.684 10:45:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:15.624 [2024-11-20 10:45:47.899714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:15.624 [2024-11-20 10:45:47.899748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.624 [2024-11-20 10:45:47.899758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.624 [2024-11-20 10:45:47.899765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.624 [2024-11-20 10:45:47.899771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.624 [2024-11-20 10:45:47.899777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.624 [2024-11-20 10:45:47.899783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.624 [2024-11-20 10:45:47.899788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.624 [2024-11-20 10:45:47.899793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.624 [2024-11-20 10:45:47.899799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.624 [2024-11-20 10:45:47.899804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.624 [2024-11-20 10:45:47.899810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b4c00 is same with the state(6) to be set 00:28:15.624 [2024-11-20 10:45:47.909735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b4c00 (9): Bad file descriptor 00:28:15.624 [2024-11-20 10:45:47.919768] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.624 [2024-11-20 10:45:47.919777] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.624 [2024-11-20 10:45:47.919781] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.624 [2024-11-20 10:45:47.919784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.625 [2024-11-20 10:45:47.919800] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.625 10:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.007 [2024-11-20 10:45:48.955216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:17.007 [2024-11-20 10:45:48.955307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b4c00 with addr=10.0.0.2, port=4420 00:28:17.007 [2024-11-20 10:45:48.955339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b4c00 is same with the state(6) to be set 00:28:17.007 [2024-11-20 10:45:48.955393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b4c00 (9): Bad file descriptor 00:28:17.007 [2024-11-20 10:45:48.956513] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:17.007 [2024-11-20 10:45:48.956582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:17.007 [2024-11-20 10:45:48.956605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:17.007 [2024-11-20 10:45:48.956629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:17.007 [2024-11-20 10:45:48.956649] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:17.007 [2024-11-20 10:45:48.956665] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:17.007 [2024-11-20 10:45:48.956679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:17.007 [2024-11-20 10:45:48.956701] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:17.007 [2024-11-20 10:45:48.956716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:17.007 10:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.007 10:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:17.007 10:45:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.949 [2024-11-20 10:45:49.959137] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:17.949 [2024-11-20 10:45:49.959152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:17.949 [2024-11-20 10:45:49.959164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:17.949 [2024-11-20 10:45:49.959169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:17.949 [2024-11-20 10:45:49.959175] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:17.949 [2024-11-20 10:45:49.959181] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:17.949 [2024-11-20 10:45:49.959184] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:17.949 [2024-11-20 10:45:49.959187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:17.949 [2024-11-20 10:45:49.959202] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:17.949 [2024-11-20 10:45:49.959218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.949 [2024-11-20 10:45:49.959229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.949 [2024-11-20 10:45:49.959236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.949 [2024-11-20 10:45:49.959241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.949 [2024-11-20 10:45:49.959247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.949 [2024-11-20 10:45:49.959252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.949 [2024-11-20 10:45:49.959258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.949 [2024-11-20 10:45:49.959263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.949 [2024-11-20 10:45:49.959270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.949 [2024-11-20 10:45:49.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.949 [2024-11-20 10:45:49.959280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:28:17.949 [2024-11-20 10:45:49.959658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a4340 (9): Bad file descriptor 00:28:17.949 [2024-11-20 10:45:49.960668] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:17.949 [2024-11-20 10:45:49.960678] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.949 10:45:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:17.949 10:45:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.892 10:45:51 
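Together with the address removal earlier (sh@75/@76), the restore just performed (sh@82/@83) is the whole fault injection, bracketed by waits on the bdev list:

# Inject the fault: take the target-side address and link away ...
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# ... the controller fails and nvme0n1 disappears; then restore the path and
# let the discovery service attach a fresh controller, which surfaces as nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up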
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:18.892 10:45:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.833 [2024-11-20 10:45:52.017072] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:19.833 [2024-11-20 10:45:52.017086] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:19.833 [2024-11-20 10:45:52.017096] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:19.833 [2024-11-20 10:45:52.145475] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.093 [2024-11-20 10:45:52.246068] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:20.093 [2024-11-20 10:45:52.246698] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x22b6f20:1 started. 00:28:20.093 [2024-11-20 10:45:52.247585] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:20.093 [2024-11-20 10:45:52.247613] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:20.093 [2024-11-20 10:45:52.247628] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:20.093 [2024-11-20 10:45:52.247638] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:20.093 [2024-11-20 10:45:52.247644] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:20.093 [2024-11-20 10:45:52.255768] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x22b6f20 was disconnected and freed. delete nvme_qpair. 
00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2193953 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2193953 ']' 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2193953 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2193953 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2193953' 00:28:20.093 killing process with pid 2193953 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2193953 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2193953 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.093 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.353 rmmod nvme_tcp 00:28:20.353 rmmod nvme_fabrics 00:28:20.353 rmmod nvme_keyring 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2193607 ']' 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2193607 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2193607 ']' 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2193607 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
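killprocess, run here for the host app and just below for the target, follows a guarded kill pattern: confirm the pid is alive, refuse to kill a sudo wrapper, then kill and reap. Its shape, reconstructed from the trace:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 0                     # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 in the trace
        [ "$name" = sudo ] && return 1             # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}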
common/autotest_common.sh@959 -- # uname 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2193607 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2193607' 00:28:20.353 killing process with pid 2193607 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2193607 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2193607 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.353 10:45:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.968 00:28:22.968 real 0m23.342s 00:28:22.968 user 0m27.257s 00:28:22.968 sys 0m7.158s 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:22.968 ************************************ 00:28:22.968 END TEST nvmf_discovery_remove_ifc 00:28:22.968 ************************************ 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.968 ************************************ 00:28:22.968 
START TEST nvmf_identify_kernel_target 00:28:22.968 ************************************ 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:22.968 * Looking for test storage... 00:28:22.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:22.968 10:45:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.968 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:22.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.968 --rc genhtml_branch_coverage=1 00:28:22.968 --rc genhtml_function_coverage=1 00:28:22.968 --rc genhtml_legend=1 00:28:22.968 --rc geninfo_all_blocks=1 00:28:22.968 --rc geninfo_unexecuted_blocks=1 00:28:22.969 00:28:22.969 ' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.969 --rc genhtml_branch_coverage=1 00:28:22.969 --rc genhtml_function_coverage=1 00:28:22.969 --rc genhtml_legend=1 00:28:22.969 --rc geninfo_all_blocks=1 00:28:22.969 --rc geninfo_unexecuted_blocks=1 00:28:22.969 00:28:22.969 ' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.969 --rc genhtml_branch_coverage=1 00:28:22.969 --rc genhtml_function_coverage=1 00:28:22.969 --rc genhtml_legend=1 00:28:22.969 --rc geninfo_all_blocks=1 00:28:22.969 --rc geninfo_unexecuted_blocks=1 00:28:22.969 00:28:22.969 ' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.969 --rc genhtml_branch_coverage=1 00:28:22.969 --rc genhtml_function_coverage=1 00:28:22.969 --rc genhtml_legend=1 00:28:22.969 --rc geninfo_all_blocks=1 00:28:22.969 --rc geninfo_unexecuted_blocks=1 00:28:22.969 00:28:22.969 ' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:22.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.969 10:45:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.129 10:46:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:31.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:31.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:31.129 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:31.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.129 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:28:31.130 00:28:31.130 --- 10.0.0.2 ping statistics --- 00:28:31.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.130 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:31.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:28:31.130 00:28:31.130 --- 10.0.0.1 ping statistics --- 00:28:31.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.130 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.130 10:46:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:31.130 10:46:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:33.678 Waiting for block devices as requested 00:28:33.678 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:33.678 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:33.939 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:33.939 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:33.939 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:34.200 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:34.200 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:34.200 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:34.200 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:34.461 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:34.722 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:34.722 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:34.722 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:34.722 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:34.983 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:34.983 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:34.983 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
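configure_kernel_target, entered just above, builds a kernel NVMe-oF TCP target entirely through configfs; the mkdir/echo/ln writes that follow create a subsystem for nqn.2016-06.io.spdk:testnqn, back namespace 1 with the /dev/nvme0n1 that block_in_use clears, and open TCP port 1 on 10.0.0.1:4420. Consolidated as a hedged sketch (the attribute names are the standard nvmet configfs layout; the device, address, and NQN are this run's values):

  # Sketch of the configfs wiring nvmf/common.sh performs below; run as root.
  modprobe nvmet
  modprobe nvmet-tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1 > "$subsys/attr_allow_any_host"          # first 'echo 1' in the log
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"          # second 'echo 1'
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"             # expose the subsystem on the port

Once the symlink lands, the target answers discovery, which is exactly what the nvme discover output further below shows: two records, the discovery subsystem and nqn.2016-06.io.spdk:testnqn.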
00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:35.567 No valid GPT data, bailing 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:35.567 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:35.568 00:28:35.568 Discovery Log Number of Records 2, Generation counter 2 00:28:35.568 =====Discovery Log Entry 0====== 00:28:35.568 trtype: tcp 00:28:35.568 adrfam: ipv4 00:28:35.568 subtype: current discovery subsystem 00:28:35.568 treq: not specified, sq flow control disable supported 00:28:35.568 portid: 1 00:28:35.568 trsvcid: 4420 00:28:35.568 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:35.568 traddr: 10.0.0.1 00:28:35.568 eflags: none 00:28:35.568 sectype: none 00:28:35.568 =====Discovery Log Entry 1====== 00:28:35.568 trtype: tcp 00:28:35.568 adrfam: ipv4 00:28:35.568 subtype: nvme subsystem 00:28:35.568 treq: not specified, sq flow control disable 
supported 00:28:35.568 portid: 1 00:28:35.568 trsvcid: 4420 00:28:35.568 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:35.568 traddr: 10.0.0.1 00:28:35.568 eflags: none 00:28:35.568 sectype: none 00:28:35.568 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:35.568 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:35.568 ===================================================== 00:28:35.568 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:35.568 ===================================================== 00:28:35.568 Controller Capabilities/Features 00:28:35.568 ================================ 00:28:35.568 Vendor ID: 0000 00:28:35.568 Subsystem Vendor ID: 0000 00:28:35.568 Serial Number: c4cb662aa0731569290a 00:28:35.568 Model Number: Linux 00:28:35.568 Firmware Version: 6.8.9-20 00:28:35.568 Recommended Arb Burst: 0 00:28:35.568 IEEE OUI Identifier: 00 00 00 00:28:35.568 Multi-path I/O 00:28:35.568 May have multiple subsystem ports: No 00:28:35.568 May have multiple controllers: No 00:28:35.568 Associated with SR-IOV VF: No 00:28:35.568 Max Data Transfer Size: Unlimited 00:28:35.568 Max Number of Namespaces: 0 00:28:35.568 Max Number of I/O Queues: 1024 00:28:35.568 NVMe Specification Version (VS): 1.3 00:28:35.568 NVMe Specification Version (Identify): 1.3 00:28:35.568 Maximum Queue Entries: 1024 00:28:35.568 Contiguous Queues Required: No 00:28:35.568 Arbitration Mechanisms Supported 00:28:35.568 Weighted Round Robin: Not Supported 00:28:35.568 Vendor Specific: Not Supported 00:28:35.568 Reset Timeout: 7500 ms 00:28:35.568 Doorbell Stride: 4 bytes 00:28:35.568 NVM Subsystem Reset: Not Supported 00:28:35.568 Command Sets Supported 00:28:35.568 NVM Command Set: Supported 00:28:35.568 Boot Partition: Not Supported 00:28:35.568 Memory Page Size Minimum: 4096 bytes 00:28:35.568 Memory Page Size Maximum: 4096 bytes 00:28:35.568 Persistent Memory Region: Not Supported 00:28:35.568 Optional Asynchronous Events Supported 00:28:35.568 Namespace Attribute Notices: Not Supported 00:28:35.568 Firmware Activation Notices: Not Supported 00:28:35.568 ANA Change Notices: Not Supported 00:28:35.568 PLE Aggregate Log Change Notices: Not Supported 00:28:35.568 LBA Status Info Alert Notices: Not Supported 00:28:35.568 EGE Aggregate Log Change Notices: Not Supported 00:28:35.568 Normal NVM Subsystem Shutdown event: Not Supported 00:28:35.568 Zone Descriptor Change Notices: Not Supported 00:28:35.568 Discovery Log Change Notices: Supported 00:28:35.568 Controller Attributes 00:28:35.568 128-bit Host Identifier: Not Supported 00:28:35.568 Non-Operational Permissive Mode: Not Supported 00:28:35.568 NVM Sets: Not Supported 00:28:35.568 Read Recovery Levels: Not Supported 00:28:35.568 Endurance Groups: Not Supported 00:28:35.568 Predictable Latency Mode: Not Supported 00:28:35.568 Traffic Based Keep ALive: Not Supported 00:28:35.568 Namespace Granularity: Not Supported 00:28:35.568 SQ Associations: Not Supported 00:28:35.568 UUID List: Not Supported 00:28:35.568 Multi-Domain Subsystem: Not Supported 00:28:35.568 Fixed Capacity Management: Not Supported 00:28:35.568 Variable Capacity Management: Not Supported 00:28:35.568 Delete Endurance Group: Not Supported 00:28:35.568 Delete NVM Set: Not Supported 00:28:35.568 Extended LBA Formats Supported: Not Supported 00:28:35.568 Flexible Data Placement 
Supported: Not Supported 00:28:35.568 00:28:35.568 Controller Memory Buffer Support 00:28:35.568 ================================ 00:28:35.568 Supported: No 00:28:35.568 00:28:35.568 Persistent Memory Region Support 00:28:35.568 ================================ 00:28:35.568 Supported: No 00:28:35.568 00:28:35.568 Admin Command Set Attributes 00:28:35.568 ============================ 00:28:35.568 Security Send/Receive: Not Supported 00:28:35.568 Format NVM: Not Supported 00:28:35.568 Firmware Activate/Download: Not Supported 00:28:35.568 Namespace Management: Not Supported 00:28:35.568 Device Self-Test: Not Supported 00:28:35.568 Directives: Not Supported 00:28:35.568 NVMe-MI: Not Supported 00:28:35.568 Virtualization Management: Not Supported 00:28:35.568 Doorbell Buffer Config: Not Supported 00:28:35.568 Get LBA Status Capability: Not Supported 00:28:35.568 Command & Feature Lockdown Capability: Not Supported 00:28:35.568 Abort Command Limit: 1 00:28:35.568 Async Event Request Limit: 1 00:28:35.568 Number of Firmware Slots: N/A 00:28:35.568 Firmware Slot 1 Read-Only: N/A 00:28:35.568 Firmware Activation Without Reset: N/A 00:28:35.568 Multiple Update Detection Support: N/A 00:28:35.568 Firmware Update Granularity: No Information Provided 00:28:35.568 Per-Namespace SMART Log: No 00:28:35.568 Asymmetric Namespace Access Log Page: Not Supported 00:28:35.568 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:35.568 Command Effects Log Page: Not Supported 00:28:35.568 Get Log Page Extended Data: Supported 00:28:35.568 Telemetry Log Pages: Not Supported 00:28:35.568 Persistent Event Log Pages: Not Supported 00:28:35.568 Supported Log Pages Log Page: May Support 00:28:35.568 Commands Supported & Effects Log Page: Not Supported 00:28:35.568 Feature Identifiers & Effects Log Page:May Support 00:28:35.568 NVMe-MI Commands & Effects Log Page: May Support 00:28:35.568 Data Area 4 for Telemetry Log: Not Supported 00:28:35.568 Error Log Page Entries Supported: 1 00:28:35.568 Keep Alive: Not Supported 00:28:35.568 00:28:35.568 NVM Command Set Attributes 00:28:35.568 ========================== 00:28:35.568 Submission Queue Entry Size 00:28:35.568 Max: 1 00:28:35.568 Min: 1 00:28:35.568 Completion Queue Entry Size 00:28:35.568 Max: 1 00:28:35.568 Min: 1 00:28:35.568 Number of Namespaces: 0 00:28:35.568 Compare Command: Not Supported 00:28:35.568 Write Uncorrectable Command: Not Supported 00:28:35.568 Dataset Management Command: Not Supported 00:28:35.568 Write Zeroes Command: Not Supported 00:28:35.568 Set Features Save Field: Not Supported 00:28:35.568 Reservations: Not Supported 00:28:35.568 Timestamp: Not Supported 00:28:35.568 Copy: Not Supported 00:28:35.568 Volatile Write Cache: Not Present 00:28:35.568 Atomic Write Unit (Normal): 1 00:28:35.568 Atomic Write Unit (PFail): 1 00:28:35.568 Atomic Compare & Write Unit: 1 00:28:35.568 Fused Compare & Write: Not Supported 00:28:35.568 Scatter-Gather List 00:28:35.568 SGL Command Set: Supported 00:28:35.568 SGL Keyed: Not Supported 00:28:35.568 SGL Bit Bucket Descriptor: Not Supported 00:28:35.568 SGL Metadata Pointer: Not Supported 00:28:35.568 Oversized SGL: Not Supported 00:28:35.568 SGL Metadata Address: Not Supported 00:28:35.568 SGL Offset: Supported 00:28:35.568 Transport SGL Data Block: Not Supported 00:28:35.568 Replay Protected Memory Block: Not Supported 00:28:35.568 00:28:35.568 Firmware Slot Information 00:28:35.568 ========================= 00:28:35.568 Active slot: 0 00:28:35.568 00:28:35.568 00:28:35.568 Error Log 00:28:35.568 
========= 00:28:35.568 00:28:35.568 Active Namespaces 00:28:35.568 ================= 00:28:35.568 Discovery Log Page 00:28:35.568 ================== 00:28:35.568 Generation Counter: 2 00:28:35.568 Number of Records: 2 00:28:35.568 Record Format: 0 00:28:35.568 00:28:35.568 Discovery Log Entry 0 00:28:35.568 ---------------------- 00:28:35.569 Transport Type: 3 (TCP) 00:28:35.569 Address Family: 1 (IPv4) 00:28:35.569 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:35.569 Entry Flags: 00:28:35.569 Duplicate Returned Information: 0 00:28:35.569 Explicit Persistent Connection Support for Discovery: 0 00:28:35.569 Transport Requirements: 00:28:35.569 Secure Channel: Not Specified 00:28:35.569 Port ID: 1 (0x0001) 00:28:35.569 Controller ID: 65535 (0xffff) 00:28:35.569 Admin Max SQ Size: 32 00:28:35.569 Transport Service Identifier: 4420 00:28:35.569 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:35.569 Transport Address: 10.0.0.1 00:28:35.569 Discovery Log Entry 1 00:28:35.569 ---------------------- 00:28:35.569 Transport Type: 3 (TCP) 00:28:35.569 Address Family: 1 (IPv4) 00:28:35.569 Subsystem Type: 2 (NVM Subsystem) 00:28:35.569 Entry Flags: 00:28:35.569 Duplicate Returned Information: 0 00:28:35.569 Explicit Persistent Connection Support for Discovery: 0 00:28:35.569 Transport Requirements: 00:28:35.569 Secure Channel: Not Specified 00:28:35.569 Port ID: 1 (0x0001) 00:28:35.569 Controller ID: 65535 (0xffff) 00:28:35.569 Admin Max SQ Size: 32 00:28:35.569 Transport Service Identifier: 4420 00:28:35.569 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:35.569 Transport Address: 10.0.0.1 00:28:35.569 10:46:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.830 get_feature(0x01) failed 00:28:35.830 get_feature(0x02) failed 00:28:35.830 get_feature(0x04) failed 00:28:35.830 ===================================================== 00:28:35.830 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.830 ===================================================== 00:28:35.830 Controller Capabilities/Features 00:28:35.830 ================================ 00:28:35.830 Vendor ID: 0000 00:28:35.830 Subsystem Vendor ID: 0000 00:28:35.830 Serial Number: c528d5f993c4eff8afbf 00:28:35.830 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:35.830 Firmware Version: 6.8.9-20 00:28:35.830 Recommended Arb Burst: 6 00:28:35.830 IEEE OUI Identifier: 00 00 00 00:28:35.830 Multi-path I/O 00:28:35.830 May have multiple subsystem ports: Yes 00:28:35.830 May have multiple controllers: Yes 00:28:35.830 Associated with SR-IOV VF: No 00:28:35.830 Max Data Transfer Size: Unlimited 00:28:35.830 Max Number of Namespaces: 1024 00:28:35.830 Max Number of I/O Queues: 128 00:28:35.830 NVMe Specification Version (VS): 1.3 00:28:35.830 NVMe Specification Version (Identify): 1.3 00:28:35.830 Maximum Queue Entries: 1024 00:28:35.830 Contiguous Queues Required: No 00:28:35.830 Arbitration Mechanisms Supported 00:28:35.830 Weighted Round Robin: Not Supported 00:28:35.830 Vendor Specific: Not Supported 00:28:35.830 Reset Timeout: 7500 ms 00:28:35.830 Doorbell Stride: 4 bytes 00:28:35.830 NVM Subsystem Reset: Not Supported 00:28:35.830 Command Sets Supported 00:28:35.830 NVM Command Set: Supported 00:28:35.830 Boot Partition: Not Supported 00:28:35.830 
Memory Page Size Minimum: 4096 bytes 00:28:35.830 Memory Page Size Maximum: 4096 bytes 00:28:35.830 Persistent Memory Region: Not Supported 00:28:35.830 Optional Asynchronous Events Supported 00:28:35.830 Namespace Attribute Notices: Supported 00:28:35.830 Firmware Activation Notices: Not Supported 00:28:35.830 ANA Change Notices: Supported 00:28:35.830 PLE Aggregate Log Change Notices: Not Supported 00:28:35.830 LBA Status Info Alert Notices: Not Supported 00:28:35.830 EGE Aggregate Log Change Notices: Not Supported 00:28:35.830 Normal NVM Subsystem Shutdown event: Not Supported 00:28:35.830 Zone Descriptor Change Notices: Not Supported 00:28:35.830 Discovery Log Change Notices: Not Supported 00:28:35.830 Controller Attributes 00:28:35.830 128-bit Host Identifier: Supported 00:28:35.830 Non-Operational Permissive Mode: Not Supported 00:28:35.830 NVM Sets: Not Supported 00:28:35.830 Read Recovery Levels: Not Supported 00:28:35.830 Endurance Groups: Not Supported 00:28:35.830 Predictable Latency Mode: Not Supported 00:28:35.830 Traffic Based Keep ALive: Supported 00:28:35.830 Namespace Granularity: Not Supported 00:28:35.830 SQ Associations: Not Supported 00:28:35.830 UUID List: Not Supported 00:28:35.830 Multi-Domain Subsystem: Not Supported 00:28:35.830 Fixed Capacity Management: Not Supported 00:28:35.830 Variable Capacity Management: Not Supported 00:28:35.830 Delete Endurance Group: Not Supported 00:28:35.830 Delete NVM Set: Not Supported 00:28:35.830 Extended LBA Formats Supported: Not Supported 00:28:35.830 Flexible Data Placement Supported: Not Supported 00:28:35.830 00:28:35.830 Controller Memory Buffer Support 00:28:35.830 ================================ 00:28:35.830 Supported: No 00:28:35.830 00:28:35.830 Persistent Memory Region Support 00:28:35.830 ================================ 00:28:35.830 Supported: No 00:28:35.830 00:28:35.830 Admin Command Set Attributes 00:28:35.830 ============================ 00:28:35.830 Security Send/Receive: Not Supported 00:28:35.830 Format NVM: Not Supported 00:28:35.830 Firmware Activate/Download: Not Supported 00:28:35.830 Namespace Management: Not Supported 00:28:35.830 Device Self-Test: Not Supported 00:28:35.830 Directives: Not Supported 00:28:35.830 NVMe-MI: Not Supported 00:28:35.830 Virtualization Management: Not Supported 00:28:35.830 Doorbell Buffer Config: Not Supported 00:28:35.830 Get LBA Status Capability: Not Supported 00:28:35.830 Command & Feature Lockdown Capability: Not Supported 00:28:35.830 Abort Command Limit: 4 00:28:35.830 Async Event Request Limit: 4 00:28:35.830 Number of Firmware Slots: N/A 00:28:35.830 Firmware Slot 1 Read-Only: N/A 00:28:35.830 Firmware Activation Without Reset: N/A 00:28:35.830 Multiple Update Detection Support: N/A 00:28:35.830 Firmware Update Granularity: No Information Provided 00:28:35.830 Per-Namespace SMART Log: Yes 00:28:35.830 Asymmetric Namespace Access Log Page: Supported 00:28:35.830 ANA Transition Time : 10 sec 00:28:35.830 00:28:35.830 Asymmetric Namespace Access Capabilities 00:28:35.830 ANA Optimized State : Supported 00:28:35.830 ANA Non-Optimized State : Supported 00:28:35.830 ANA Inaccessible State : Supported 00:28:35.830 ANA Persistent Loss State : Supported 00:28:35.830 ANA Change State : Supported 00:28:35.830 ANAGRPID is not changed : No 00:28:35.830 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:35.830 00:28:35.830 ANA Group Identifier Maximum : 128 00:28:35.830 Number of ANA Group Identifiers : 128 00:28:35.830 Max Number of Allowed Namespaces : 1024 00:28:35.830 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:35.831 Command Effects Log Page: Supported 00:28:35.831 Get Log Page Extended Data: Supported 00:28:35.831 Telemetry Log Pages: Not Supported 00:28:35.831 Persistent Event Log Pages: Not Supported 00:28:35.831 Supported Log Pages Log Page: May Support 00:28:35.831 Commands Supported & Effects Log Page: Not Supported 00:28:35.831 Feature Identifiers & Effects Log Page:May Support 00:28:35.831 NVMe-MI Commands & Effects Log Page: May Support 00:28:35.831 Data Area 4 for Telemetry Log: Not Supported 00:28:35.831 Error Log Page Entries Supported: 128 00:28:35.831 Keep Alive: Supported 00:28:35.831 Keep Alive Granularity: 1000 ms 00:28:35.831 00:28:35.831 NVM Command Set Attributes 00:28:35.831 ========================== 00:28:35.831 Submission Queue Entry Size 00:28:35.831 Max: 64 00:28:35.831 Min: 64 00:28:35.831 Completion Queue Entry Size 00:28:35.831 Max: 16 00:28:35.831 Min: 16 00:28:35.831 Number of Namespaces: 1024 00:28:35.831 Compare Command: Not Supported 00:28:35.831 Write Uncorrectable Command: Not Supported 00:28:35.831 Dataset Management Command: Supported 00:28:35.831 Write Zeroes Command: Supported 00:28:35.831 Set Features Save Field: Not Supported 00:28:35.831 Reservations: Not Supported 00:28:35.831 Timestamp: Not Supported 00:28:35.831 Copy: Not Supported 00:28:35.831 Volatile Write Cache: Present 00:28:35.831 Atomic Write Unit (Normal): 1 00:28:35.831 Atomic Write Unit (PFail): 1 00:28:35.831 Atomic Compare & Write Unit: 1 00:28:35.831 Fused Compare & Write: Not Supported 00:28:35.831 Scatter-Gather List 00:28:35.831 SGL Command Set: Supported 00:28:35.831 SGL Keyed: Not Supported 00:28:35.831 SGL Bit Bucket Descriptor: Not Supported 00:28:35.831 SGL Metadata Pointer: Not Supported 00:28:35.831 Oversized SGL: Not Supported 00:28:35.831 SGL Metadata Address: Not Supported 00:28:35.831 SGL Offset: Supported 00:28:35.831 Transport SGL Data Block: Not Supported 00:28:35.831 Replay Protected Memory Block: Not Supported 00:28:35.831 00:28:35.831 Firmware Slot Information 00:28:35.831 ========================= 00:28:35.831 Active slot: 0 00:28:35.831 00:28:35.831 Asymmetric Namespace Access 00:28:35.831 =========================== 00:28:35.831 Change Count : 0 00:28:35.831 Number of ANA Group Descriptors : 1 00:28:35.831 ANA Group Descriptor : 0 00:28:35.831 ANA Group ID : 1 00:28:35.831 Number of NSID Values : 1 00:28:35.831 Change Count : 0 00:28:35.831 ANA State : 1 00:28:35.831 Namespace Identifier : 1 00:28:35.831 00:28:35.831 Commands Supported and Effects 00:28:35.831 ============================== 00:28:35.831 Admin Commands 00:28:35.831 -------------- 00:28:35.831 Get Log Page (02h): Supported 00:28:35.831 Identify (06h): Supported 00:28:35.831 Abort (08h): Supported 00:28:35.831 Set Features (09h): Supported 00:28:35.831 Get Features (0Ah): Supported 00:28:35.831 Asynchronous Event Request (0Ch): Supported 00:28:35.831 Keep Alive (18h): Supported 00:28:35.831 I/O Commands 00:28:35.831 ------------ 00:28:35.831 Flush (00h): Supported 00:28:35.831 Write (01h): Supported LBA-Change 00:28:35.831 Read (02h): Supported 00:28:35.831 Write Zeroes (08h): Supported LBA-Change 00:28:35.831 Dataset Management (09h): Supported 00:28:35.831 00:28:35.831 Error Log 00:28:35.831 ========= 00:28:35.831 Entry: 0 00:28:35.831 Error Count: 0x3 00:28:35.831 Submission Queue Id: 0x0 00:28:35.831 Command Id: 0x5 00:28:35.831 Phase Bit: 0 00:28:35.831 Status Code: 0x2 00:28:35.831 Status Code Type: 0x0 00:28:35.831 Do Not Retry: 1 00:28:35.831 
Error Location: 0x28 00:28:35.831 LBA: 0x0 00:28:35.831 Namespace: 0x0 00:28:35.831 Vendor Log Page: 0x0 00:28:35.831 ----------- 00:28:35.831 Entry: 1 00:28:35.831 Error Count: 0x2 00:28:35.831 Submission Queue Id: 0x0 00:28:35.831 Command Id: 0x5 00:28:35.831 Phase Bit: 0 00:28:35.831 Status Code: 0x2 00:28:35.831 Status Code Type: 0x0 00:28:35.831 Do Not Retry: 1 00:28:35.831 Error Location: 0x28 00:28:35.831 LBA: 0x0 00:28:35.831 Namespace: 0x0 00:28:35.831 Vendor Log Page: 0x0 00:28:35.831 ----------- 00:28:35.831 Entry: 2 00:28:35.831 Error Count: 0x1 00:28:35.831 Submission Queue Id: 0x0 00:28:35.831 Command Id: 0x4 00:28:35.831 Phase Bit: 0 00:28:35.831 Status Code: 0x2 00:28:35.831 Status Code Type: 0x0 00:28:35.831 Do Not Retry: 1 00:28:35.831 Error Location: 0x28 00:28:35.831 LBA: 0x0 00:28:35.831 Namespace: 0x0 00:28:35.831 Vendor Log Page: 0x0 00:28:35.831 00:28:35.831 Number of Queues 00:28:35.831 ================ 00:28:35.831 Number of I/O Submission Queues: 128 00:28:35.831 Number of I/O Completion Queues: 128 00:28:35.831 00:28:35.831 ZNS Specific Controller Data 00:28:35.831 ============================ 00:28:35.831 Zone Append Size Limit: 0 00:28:35.831 00:28:35.831 00:28:35.831 Active Namespaces 00:28:35.831 ================= 00:28:35.831 get_feature(0x05) failed 00:28:35.831 Namespace ID:1 00:28:35.831 Command Set Identifier: NVM (00h) 00:28:35.831 Deallocate: Supported 00:28:35.831 Deallocated/Unwritten Error: Not Supported 00:28:35.831 Deallocated Read Value: Unknown 00:28:35.831 Deallocate in Write Zeroes: Not Supported 00:28:35.831 Deallocated Guard Field: 0xFFFF 00:28:35.831 Flush: Supported 00:28:35.831 Reservation: Not Supported 00:28:35.831 Namespace Sharing Capabilities: Multiple Controllers 00:28:35.831 Size (in LBAs): 3750748848 (1788GiB) 00:28:35.831 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:35.831 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:35.831 UUID: 0d4d115b-fe2f-450c-93f3-c06ad2bd1f98 00:28:35.831 Thin Provisioning: Not Supported 00:28:35.831 Per-NS Atomic Units: Yes 00:28:35.831 Atomic Write Unit (Normal): 8 00:28:35.831 Atomic Write Unit (PFail): 8 00:28:35.831 Preferred Write Granularity: 8 00:28:35.831 Atomic Compare & Write Unit: 8 00:28:35.831 Atomic Boundary Size (Normal): 0 00:28:35.831 Atomic Boundary Size (PFail): 0 00:28:35.831 Atomic Boundary Offset: 0 00:28:35.831 NGUID/EUI64 Never Reused: No 00:28:35.831 ANA group ID: 1 00:28:35.831 Namespace Write Protected: No 00:28:35.831 Number of LBA Formats: 1 00:28:35.831 Current LBA Format: LBA Format #00 00:28:35.831 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:35.831 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.831 rmmod nvme_tcp 00:28:35.831 rmmod nvme_fabrics 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.831 10:46:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:38.373 10:46:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:41.675 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:41.675 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:42.246 00:28:42.246 real 0m19.474s 00:28:42.246 user 0m5.286s 00:28:42.246 sys 0m11.185s 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.246 ************************************ 00:28:42.246 END TEST nvmf_identify_kernel_target 00:28:42.246 ************************************ 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.246 ************************************ 00:28:42.246 START TEST nvmf_auth_host 00:28:42.246 ************************************ 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:42.246 * Looking for test storage... 
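Before this next test starts, the trace above shows clean_kernel_target unwinding the configfs-based kernel target and setup.sh rebinding the ioatdma/nvme devices to vfio-pci. A sketch of that configfs unwind under the standard nvmet layout; the redirect target of the `echo 0` is not visible in the trace, and disabling namespaces/1/enable is assumed to be the usual step:

  # Unwind the configfs target in reverse order of creation: break the
  # port -> subsystem link first, then remove leaf directories before
  # their parents, and only then unload the target modules.
  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed redirect target
  rm -f "$cfg/ports/1/subsystems/$nqn"                  # detach subsystem from port
  rmdir "$cfg/subsystems/$nqn/namespaces/1"
  rmdir "$cfg/ports/1"
  rmdir "$cfg/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet
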
00:28:42.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:42.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.246 --rc genhtml_branch_coverage=1 00:28:42.246 --rc genhtml_function_coverage=1 00:28:42.246 --rc genhtml_legend=1 00:28:42.246 --rc geninfo_all_blocks=1 00:28:42.246 --rc geninfo_unexecuted_blocks=1 00:28:42.246 00:28:42.246 ' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:42.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.246 --rc genhtml_branch_coverage=1 00:28:42.246 --rc genhtml_function_coverage=1 00:28:42.246 --rc genhtml_legend=1 00:28:42.246 --rc geninfo_all_blocks=1 00:28:42.246 --rc geninfo_unexecuted_blocks=1 00:28:42.246 00:28:42.246 ' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:42.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.246 --rc genhtml_branch_coverage=1 00:28:42.246 --rc genhtml_function_coverage=1 00:28:42.246 --rc genhtml_legend=1 00:28:42.246 --rc geninfo_all_blocks=1 00:28:42.246 --rc geninfo_unexecuted_blocks=1 00:28:42.246 00:28:42.246 ' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:42.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.246 --rc genhtml_branch_coverage=1 00:28:42.246 --rc genhtml_function_coverage=1 00:28:42.246 --rc genhtml_legend=1 00:28:42.246 --rc geninfo_all_blocks=1 00:28:42.246 --rc geninfo_unexecuted_blocks=1 00:28:42.246 00:28:42.246 ' 00:28:42.246 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.507 10:46:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:42.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:42.507 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.508 10:46:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.645 10:46:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.645 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:50.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:50.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.646 
10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:50.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:50.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.646 10:46:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.646 10:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.646 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.646 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.646 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.646 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:28:50.646 00:28:50.646 --- 10.0.0.2 ping statistics --- 00:28:50.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.646 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:28:50.646 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:50.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:28:50.646 00:28:50.647 --- 10.0.0.1 ping statistics --- 00:28:50.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.647 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2208131 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2208131 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2208131 ']' 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
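nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app's JSON-RPC socket answers. A sketch of that readiness check, assuming SPDK's bundled scripts/rpc.py and the default /var/tmp/spdk.sock; the polling loop is illustrative rather than waitforlisten's exact implementation:

  # Poll the JSON-RPC socket until the target answers; rpc_get_methods is
  # a cheap RPC that succeeds as soon as the app is listening.
  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
          echo "nvmf_tgt is listening on $sock"
          break
      fi
      sleep 0.1
  done
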
00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.647 10:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.647 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.647 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:50.647 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.647 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.647 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5082ae39b6aa7bf6c5a364c9fcf74322 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5ld 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5082ae39b6aa7bf6c5a364c9fcf74322 0 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5082ae39b6aa7bf6c5a364c9fcf74322 0 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5082ae39b6aa7bf6c5a364c9fcf74322 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5ld 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5ld 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5ld 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:50.909 10:46:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=978541f8d321a99052987ee3104d9c866cd53051e0301a12516730f78037d244 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.W2v 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 978541f8d321a99052987ee3104d9c866cd53051e0301a12516730f78037d244 3 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 978541f8d321a99052987ee3104d9c866cd53051e0301a12516730f78037d244 3 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=978541f8d321a99052987ee3104d9c866cd53051e0301a12516730f78037d244 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.W2v 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.W2v 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.W2v 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=88d8631e618833a6308ae1d2190a4416073186059b717042 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zuK 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 88d8631e618833a6308ae1d2190a4416073186059b717042 0 00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 88d8631e618833a6308ae1d2190a4416073186059b717042 0 
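Each gen_dhchap_key call traced here draws random bytes with xxd and pipes the resulting hex string through a small Python helper to produce the DHHC-1 interchange form used by DH-HMAC-CHAP. A reconstruction of that step, assuming the representation is base64 over the secret bytes followed by their little-endian CRC-32, and that the helper (as the trace suggests) treats the hex string itself as the secret; this is a sketch, not the verbatim common.sh code:

  # Produce a DHHC-1:<digest>:<base64>: secret like the keys[]/ckeys[]
  # entries above. Digest ids per the trace: 0=null 1=sha256 2=sha384 3=sha512.
  gen_dhchap_key_sketch() {
      local len=$1 digest=$2                          # len: hex chars (32/48/64)
      local key
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len/2 random bytes -> len hex chars
      # Append CRC-32 (little-endian) to the secret and base64 the result.
      python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
  }
  gen_dhchap_key_sketch 32 0    # mirrors "gen_dhchap_key null 32" in the trace
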
00:28:50.909 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=88d8631e618833a6308ae1d2190a4416073186059b717042 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zuK 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zuK 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zuK 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:50.910 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=847bb0af5339bb29f4d23bad2e8f5c0a85fe3bcf17cc8c2d 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PmW 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 847bb0af5339bb29f4d23bad2e8f5c0a85fe3bcf17cc8c2d 2 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 847bb0af5339bb29f4d23bad2e8f5c0a85fe3bcf17cc8c2d 2 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=847bb0af5339bb29f4d23bad2e8f5c0a85fe3bcf17cc8c2d 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PmW 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PmW 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.PmW 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:51.171 10:46:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=47c76771c2643b9805fb228fe1e9ec94 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PJ9 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 47c76771c2643b9805fb228fe1e9ec94 1 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 47c76771c2643b9805fb228fe1e9ec94 1 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=47c76771c2643b9805fb228fe1e9ec94 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PJ9 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PJ9 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PJ9 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4678a0e4ad28100c58f2e148dd4b4b81 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:51.171 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Z5z 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4678a0e4ad28100c58f2e148dd4b4b81 1 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4678a0e4ad28100c58f2e148dd4b4b81 1 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4678a0e4ad28100c58f2e148dd4b4b81 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Z5z 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Z5z 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Z5z 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72f5cdf7782491488bb3c4fa30325e335047a63e49f50e50 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bj3 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72f5cdf7782491488bb3c4fa30325e335047a63e49f50e50 2 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72f5cdf7782491488bb3c4fa30325e335047a63e49f50e50 2 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72f5cdf7782491488bb3c4fa30325e335047a63e49f50e50 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bj3 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bj3 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Bj3 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:51.172 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:51.433 10:46:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9d387bb58c030c5808a86da085dbe19 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3hF 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9d387bb58c030c5808a86da085dbe19 0 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9d387bb58c030c5808a86da085dbe19 0 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a9d387bb58c030c5808a86da085dbe19 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3hF 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3hF 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3hF 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c6827a65d597f10cc11e5e66a6bfdd0241544a335367b9dbbeacad59942bca88 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3IY 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c6827a65d597f10cc11e5e66a6bfdd0241544a335367b9dbbeacad59942bca88 3 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c6827a65d597f10cc11e5e66a6bfdd0241544a335367b9dbbeacad59942bca88 3 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c6827a65d597f10cc11e5e66a6bfdd0241544a335367b9dbbeacad59942bca88 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3IY 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3IY 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3IY 00:28:51.433 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2208131 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2208131 ']' 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.434 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5ld 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.W2v ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2v 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zuK 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.PmW ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
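waitforlisten 2208131 gates everything that follows: the rpc_cmd calls can only succeed once the target process (pid 2208131) answers on /var/tmp/spdk.sock. In outline it is a bounded poll, roughly like this sketch (using rpc_get_methods as the liveness probe is an assumption; the real helper in common/autotest_common.sh is more elaborate):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do            # max_retries=100, as traced above
        # ready once any RPC round-trips on the UNIX domain socket
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        kill -0 "$pid" 2> /dev/null || return 1   # the target died while we waited
        sleep 0.5
    done
    return 1
}
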
/tmp/spdk.key-sha384.PmW 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PJ9 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Z5z ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z5z 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Bj3 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3hF ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3hF 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3IY 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.696 10:46:23 
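The host/auth.sh@80-82 entries around this point are one loop unrolled: each generated file is registered with the target's keyring under a stable name, key0..key4 for host secrets and ckey0..ckey3 for the matching controller secrets. Slot 4 was generated without a controller key, so its [[ -n '' ]] guard skips the second registration. Condensed:

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # controller (bidirectional-auth) secrets are optional per slot
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
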
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:51.696 10:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:51.696 10:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:51.696 10:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:54.997 Waiting for block devices as requested 00:28:55.258 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:55.258 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:55.258 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:55.258 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:55.520 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:55.520 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:55.520 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:55.781 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:55.781 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:56.043 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:56.043 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:56.043 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:56.304 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:56.304 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:56.304 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:56.304 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:56.565 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:57.507 No valid GPT data, bailing 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:57.507 10:46:29 
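After setup.sh reset rebinds the PCI functions (the vfio-pci -> ioatdma/nvme lines), the /sys/block/nvme* walk above picks a backing disk for the kernel target: the namespace must not be zoned, and block_in_use must come back false, which it does here because spdk-gpt.py finds no valid GPT and blkid reports no partition-table type. The shape of that selection (a sketch; the real checks span nvmf/common.sh and scripts/common.sh):

for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    # is_block_zoned: skip zoned namespaces
    [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]] && continue
    # block_in_use is false when no partition table claims the disk
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        nvme=$dev   # here: /dev/nvme0n1
        break
    fi
done
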
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:28:57.507
00:28:57.507 Discovery Log Number of Records 2, Generation counter 2
00:28:57.507 =====Discovery Log Entry 0======
00:28:57.507 trtype: tcp
00:28:57.507 adrfam: ipv4
00:28:57.507 subtype: current discovery subsystem
00:28:57.507 treq: not specified, sq flow control disable supported
00:28:57.507 portid: 1
00:28:57.507 trsvcid: 4420
00:28:57.507 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:57.507 traddr: 10.0.0.1
00:28:57.507 eflags: none
00:28:57.507 sectype: none
00:28:57.507 =====Discovery Log Entry 1======
00:28:57.507 trtype: tcp
00:28:57.507 adrfam: ipv4
00:28:57.507 subtype: nvme subsystem
00:28:57.507 treq: not specified, sq flow control disable supported
00:28:57.507 portid: 1
00:28:57.507 trsvcid: 4420
00:28:57.507 subnqn: nqn.2024-02.io.spdk:cnode0
00:28:57.507 traddr: 10.0.0.1
00:28:57.507 eflags: none
00:28:57.507 sectype: none
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==:
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==:
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host
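The bare echo/ln -s entries above are the whole kernel-target definition: nvmet is driven entirely through configfs writes. Spelled out with the redirect targets, which xtrace does not show (the standard nvmet attribute names are assumed here, including attr_model for the @693 echo and attr_allow_any_host for @695/@37; the mkdirs for the subsystem, namespace and port happened in the trace just before):

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
echo 1            > "$sub/attr_allow_any_host"   # later flipped to 0 by auth.sh@37
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$sub" "$nvmet/ports/1/subsystems/"        # expose the subsystem on the port
# nvmet_auth_init: restrict to the one host, then hang DH-HMAC-CHAP material on it
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$host" && echo 0 > "$sub/attr_allow_any_host"
ln -s "$host" "$sub/allowed_hosts/"
# nvmet_auth_set_key sha256 ffdhe2048 1, as traced:
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"         # DHHC-1:00:... host secret (slot 1)
echo "$ckey"        > "$host/dhchap_ctrl_key"    # DHHC-1:02:... enables bidirectional auth
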
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.507 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 nvme0n1 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:57.768 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
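The host/auth.sh@88 smoke-test connection has just completed (attach with key1/ckey1, confirm a controller named nvme0 exists, detach), and host/auth.sh@100-104 now begins the systematic sweep with sha256/ffdhe2048/key 0. Each connect_authenticate iteration is the same four RPCs:

# restrict the initiator to the combination under test
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# dial the kernel target, authenticating with keyring slot 0 (bidirectional)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# pass criterion: the controller actually materialized ("nvme0n1" appears in the log)
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0   # tear down for the next combination
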
00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.769 10:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.769 nvme0n1 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.029 10:46:30 
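The get_main_ns_ip preamble repeated before every attach is just address resolution: pick the right environment variable for the transport, then dereference it. A paraphrase of the traced nvmf/common.sh logic (the indirect expansion is inferred; xtrace only shows the already-expanded values):

declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
ip=${!var}                              # -> 10.0.0.1 throughout this run
[[ -n $ip ]] && echo "$ip"
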
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.029 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.030 nvme0n1 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.030 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.290 nvme0n1 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.290 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 nvme0n1 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.550 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:58.811 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.811 10:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.811 nvme0n1 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.811 10:46:31 
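Slot 4 is the unidirectional case: ckeys[4] is empty, so the attach above carries --dhchap-key key4 and no --dhchap-ctrlr-key. That is exactly what the ckey=() line at host/auth.sh@58 arranges; the array expands to nothing when no controller key exists:

ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array for keyid=4
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
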
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.811 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.072 nvme0n1 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:59.072 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.073 
10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.073 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.333 nvme0n1 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.333 10:46:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.333 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.334 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.594 nvme0n1 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.594 10:46:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.594 10:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 nvme0n1 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.856 10:46:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.856 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.117 nvme0n1 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.117 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.118 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.378 nvme0n1 00:29:00.378 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.378 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.378 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.378 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.378 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.378 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:00.638 10:46:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.638 10:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.899 nvme0n1 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
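
Each (digest, dhgroup, keyid) iteration traced above boils down to one short host-side RPC sequence. A minimal standalone sketch of that sequence for the iteration in progress here (sha256 / ffdhe4096 / keyid=2), assuming a running SPDK initiator app, scripts/rpc.py from the SPDK tree (which the rpc_cmd helper in this suite wraps), and DH-HMAC-CHAP secrets already registered in the SPDK keyring under the names key2/ckey2 earlier in the suite, outside this excerpt:

    # Restrict DH-HMAC-CHAP negotiation to this iteration's digest/DH-group pair.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # Attach with bidirectional authentication: key2 proves the host,
    # ckey2 forces the controller to prove itself back.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # The controller (and its nvme0n1 namespace) only shows up if auth succeeded.
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0              # reset for the next keyid

The nvme0n1 lines interleaved in the trace are the namespace of the freshly attached controller appearing between iterations, which is what the [[ nvme0 == \n\v\m\e\0 ]] check at host/auth.sh@64 verifies before detaching.
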
00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.899 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.159 nvme0n1 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.159 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.160 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.419 nvme0n1 00:29:01.419 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.419 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.419 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.419 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.419 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.419 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.678 10:46:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.678 10:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.938 nvme0n1 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.938 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.939 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.509 nvme0n1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 
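
For reference, the DHHC-1:NN:...: strings above are NVMe DH-HMAC-CHAP secrets (TP 8006): the two-digit field after DHHC-1 encodes the hash used to transform the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the secret plus a CRC-32. The echoes at host/auth.sh@48-51 are nvmet_auth_set_key programming the target side; xtrace does not show the redirect targets, but the kernel soft target used opposite the SPDK host exposes matching attributes under configfs. A minimal sketch of the iteration in progress here (sha256 / ffdhe6144 / keyid=1), assuming the default configfs mount and this suite's hostnqn (paths and truncated key values are illustrative):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
    echo ffdhe6144      > "$host/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:00:ODhkODYz...:' > "$host/dhchap_key"       # host secret (truncated)
    # Controller key is written only when this keyid has one, enabling
    # bidirectional authentication (the [[ -z ... ]] guard at host/auth.sh@51):
    echo 'DHHC-1:02:ODQ3YmIw...:' > "$host/dhchap_ctrl_key"  # truncated

A compatible secret can be generated with recent nvme-cli, e.g. nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-02.io.spdk:host0, where --hmac 1/2/3 selects the SHA-256/384/512 transform and 0 leaves the secret unhashed.
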
00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.509 10:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.770 nvme0n1 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.770 10:46:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.770 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.031 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 nvme0n1 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.292 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.293 10:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.863 nvme0n1 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.863 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.864 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.434 nvme0n1 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
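Lines @769-@783 of nvmf/common.sh, traced repeatedly above, are get_main_ns_ip. Reconstructed from the trace, the helper maps the transport to the name of an environment variable and then dereferences it with bash indirect expansion, which is why the log shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. The early-return error handling is a guess; the trace only ever shows the success path.

  get_main_ns_ip() {
          local ip
          local -A ip_candidates=(
                  [rdma]=NVMF_FIRST_TARGET_IP   # @772
                  [tcp]=NVMF_INITIATOR_IP       # @773
          )

          [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
          ip=${ip_candidates[$TEST_TRANSPORT]}  # @776: variable *name*, not value
          [[ -z ${!ip} ]] && return 1           # @778: dereference and sanity-check
          echo "${!ip}"                         # @783: 10.0.0.1 in this run
  }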
ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.434 10:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
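A note on the DHHC-1 strings that keep appearing: in the secret representation used by nvme-cli and the kernel (background knowledge, not something stated in the log), DHHC-1:<hh>:<base64>: carries a hash identifier <hh> (00 = opaque 32-byte secret, 01 = SHA-256/32 B, 02 = SHA-384/48 B, 03 = SHA-512/64 B) and a base64 payload holding the secret followed by its 4-byte CRC-32. A quick parser, assuming GNU base64:

  dhchap_key_info() {
          local key=$1 hh b64 len
          IFS=: read -r _ hh b64 _ <<< "$key"           # DHHC-1 : hh : b64 :
          len=$(printf '%s' "$b64" | base64 -d | wc -c)
          echo "hash-id=$hh secret-bytes=$((len - 4))"  # last 4 bytes are CRC-32
  }

  dhchap_key_info 'DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=:'
  # hash-id=03 secret-bytes=64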
common/autotest_common.sh@10 -- # set +x 00:29:05.006 nvme0n1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.006 10:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.961 nvme0n1 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:05.961 
10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.961 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.962 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.531 nvme0n1 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.531 
10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.531 10:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.101 nvme0n1 00:29:07.101 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.101 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.101 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.101 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.101 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.361 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.362 10:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.931 nvme0n1 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.931 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
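The host/auth.sh@100-@102 for-loops visible above reveal the overall sweep: every digest crossed with every DH group crossed with every key index. Only sha256/sha384 and ffdhe2048/ffdhe6144/ffdhe8192 appear in this excerpt; the digests/dhgroups/keys arrays are populated earlier in auth.sh, so their exact contents below are an assumption about the script, not a quote from it.

  # Shape of the matrix driving host/auth.sh@100-@104 (array contents assumed).
  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  # keys/ckeys: arrays of DHHC-1 secrets, set up earlier in the script.

  for digest in "${digests[@]}"; do
          for dhgroup in "${dhgroups[@]}"; do
                  for keyid in "${!keys[@]}"; do
                          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
                          connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
                  done
          done
  done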
DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.932 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.192 nvme0n1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
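The odd-looking [[ nvme0 == \n\v\m\e\0 ]] entries repeated through this section are an xtrace artifact, not control characters: when the right-hand side of == inside [[ ]] is quoted, bash's set -x backslash-escapes every character to show the operand is matched literally rather than as a glob. Minimal reproduction:

  set -x
  name=nvme0
  [[ $name == "nvme0" ]]   # traced as: [[ nvme0 == \n\v\m\e\0 ]]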
host/auth.sh@61 -- # get_main_ns_ip 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.192 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.453 nvme0n1 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:08.453 10:46:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.453 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.713 nvme0n1 00:29:08.713 10:46:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:08.713 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.714 10:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.974 nvme0n1 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
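The echo 'hmac(sha384)' / echo ffdhe2048 / echo DHHC-1:... sequence just above is nvmet_auth_set_key writing the expected digest, DH group, and secrets into the kernel target. xtrace does not print redirection targets, but the values line up with the Linux nvmet configfs host attributes, so a plausible reconstruction looks like the following (the configfs paths and attribute names are an assumption, and the target kernel needs CONFIG_NVME_TARGET_AUTH):

nvmet_auth_set_key() {
    local digest dhgroup keyid
    digest=$1 dhgroup=$2 keyid=$3
    # Assumed location of the allowed-host entry for this hostnqn.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe2048
    echo "${keys[keyid]}"  > "${host}/dhchap_key"      # host DHHC-1 secret
    # Bidirectional rounds also set the controller key; keyid 4 skips this.
    [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
}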
common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.974 nvme0n1 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.974 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.975 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.975 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.975 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.235 nvme0n1 00:29:09.235 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.496 
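That closes the first ffdhe3072 round (keyid 0). Since rpc_cmd is just the harness wrapper around SPDK's JSON-RPC client, the same round can be reproduced by hand from an SPDK tree against the default /var/tmp/spdk.sock application socket; the short names key0/ckey0 imply the secrets were registered as keyring entries earlier in the test (for example with keyring_file_add_key), which is assumed here:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0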
10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.496 10:46:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.757 nvme0n1 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.757 10:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.018 nvme0n1 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.018 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.279 nvme0n1 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:10.279 
10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.279 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.280 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.280 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.280 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.280 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.540 nvme0n1 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.540 
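That completes the ffdhe2048 and ffdhe3072 passes; from the next line the identical sequence restarts with ffdhe4096. The for frames at host/auth.sh@101 and @102 in the trace imply the driving loops are simply the following sketch (the fixed sha384 digest here is presumably one iteration of an outer digest loop that this excerpt does not show):

for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do         # 0 1 2 3 4, one DHHC-1 secret each
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # connect, verify, detach
    done
done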
10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.540 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.541 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.541 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.541 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.541 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.541 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.801 nvme0n1 00:29:10.801 10:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.801 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:10.802 10:46:43 
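The ip_candidates block repeated before every attach is get_main_ns_ip picking the address to dial. Reconstructed from the nvmf/common.sh@769-783 frames above, it reduces to a transport-to-variable-name lookup followed by an indirect expansion; the trace only shows the expanded value tcp, so the TEST_TRANSPORT variable name is an assumption:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Map the transport to the *name* of the variable holding the address.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion; in this run NVMF_INITIATOR_IP resolves to 10.0.0.1.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}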
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.802 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.063 nvme0n1 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.063 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.324 nvme0n1 00:29:11.324 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.324 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.324 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.324 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.324 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.584 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.585 10:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.844 nvme0n1 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.844 10:46:44 
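For reference, the DHHC-1 strings cycling through this section follow the NVMe-oF DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> names the transformation hash applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32; that reading comes from the spec, not from anything this test prints. A quick length check on key 0 from this section:

# 48 base64 characters decode to 36 bytes: a 32-byte secret plus 4 CRC bytes.
key='DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi:'
cut -d: -f3 <<< "$key" | base64 -d | wc -c    # prints 36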
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.844 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.845 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.845 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.845 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:11.845 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.845 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.104 nvme0n1 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
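The repeated nvmf/common.sh@769-783 block is get_main_ns_ip picking which environment variable holds the connect address for the active transport; with tcp it dereferences NVMF_INITIATOR_IP and prints 10.0.0.1. A condensed reconstruction follows; xtrace only shows the branches actually taken, so the early-return guards are guesses:

    # Reconstructed from nvmf/common.sh@769-783 in the trace; error paths
    # are assumed, only the successful branch appears in this log.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # @775: bail out unless the transport maps to a known variable name.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # @778: indirect value must be set
        echo "${!ip}"                          # @783: prints 10.0.0.1 here
    }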
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
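The ckey=(...) assignment at host/auth.sh@58 leans on bash's :+ alternate-value expansion: when ckeys[keyid] is unset or empty the array stays empty and the attach call gets no --dhchap-ctrlr-key argument at all; when a controller key exists, two extra argv words appear. That is why the keyid=4 rounds (ckey='') attach with --dhchap-key key4 only. A standalone illustration, with placeholder secrets:

    # ${var:+word}: expands to word only if var is set AND non-empty.
    declare -a ckeys=([0]="DHHC-1:03:placeholder" [4]="")

    for keyid in 0 4; do
        # Empty expansion -> zero-element array -> no extra arguments.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
    # keyid=4 -> 0 extra args: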
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.104 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.674 nvme0n1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.674 10:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.244 nvme0n1 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.244 10:46:45 
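Every secret in this run uses the NVMe-oF "DHHC-1:xx:base64:" representation. Per the NVMe DH-HMAC-CHAP secret format (background knowledge, not something this log states), the two-digit field encodes the transformation hash (00 = plain secret, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32, so plain secrets of 32/48/64 bytes decode to 36/52/68 bytes. A quick sanity check one can run against such a string:

    # Sanity-check a DHHC-1 secret representation; the format details in the
    # comments are assumptions from the NVMe spec, not shown in this log.
    check_dhchap_key() {
        local rep=$1 b64 bytes
        [[ $rep =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/=]+):?$ ]] || {
            echo "not a DHHC-1 secret" >&2
            return 1
        }
        b64=${BASH_REMATCH[2]}
        bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
        # 32/48/64-byte secret plus the 4-byte CRC-32 trailer:
        case $bytes in
            36 | 52 | 68) echo "ok: $((bytes - 4))-byte secret" ;;
            *) echo "unexpected decoded length: $bytes" >&2; return 1 ;;
        esac
    }

    # Using the keyid=2 secret from the trace; prints "ok: 32-byte secret".
    check_dhchap_key "DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX:"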
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.244 10:46:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.244 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.504 nvme0n1 00:29:13.504 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.504 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.504 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.504 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.504 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.504 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
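After each attach, the test proves authentication actually succeeded by listing controllers and comparing names (host/auth.sh@64). The odd-looking [[ nvme0 == \n\v\m\e\0 ]] in the trace is just how xtrace renders a quoted right-hand side: inside [[ ]] an unquoted RHS would be treated as a glob pattern, so the script quotes it, and set -x prints that quoting as per-character backslashes. A minimal demonstration:

    # Why the trace shows [[ nvme0 == \n\v\m\e\0 ]]: a quoted RHS in [[ ]] is
    # matched literally, and `set -x` prints the quoting as backslashes.
    set -x
    name=$(echo '[{"name":"nvme0"}]' | jq -r '.[].name')   # stand-in for rpc_cmd
    [[ $name == "nvme0" ]] && echo "controller up, auth ok"
    set +x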
key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.764 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:13.765 10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.765 
10:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.025 nvme0n1 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.025 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:14.285 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.286 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.547 nvme0n1 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.547 10:46:46 
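The constant autotest_common.sh@563 xtrace_disable / @591 [[ 0 == 0 ]] bracketing around every rpc_cmd is the harness suppressing set -x while the chatty RPC runs, then restoring it only if it was on beforehand; the [[ 0 == 0 ]] is that saved state being tested. A minimal version of the idiom, where only the entry points and the state test are visible in the log and the variable name and bodies are assumptions:

    # Minimal save/restore for `set -x`, shaped like the helpers in the trace.
    xtrace_disable() {
        XTRACE_WAS_ON=$([[ $- == *x* ]] && echo 0 || echo 1)
        set +x
    }
    xtrace_restore() {
        # Renders as `[[ 0 == 0 ]]` in the trace when xtrace had been enabled.
        [[ $XTRACE_WAS_ON == 0 ]] && set -x
    }

    rpc_cmd() {
        xtrace_disable
        "$rootdir/scripts/rpc.py" "$@"   # $rootdir: SPDK checkout, assumed
        local rc=$?
        xtrace_restore
        return $rc
    }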
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.547 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.808 10:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.378 nvme0n1 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.378 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
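Note that the attach calls pass --dhchap-key key0 and --dhchap-ctrlr-key ckey0: these are key names, not secrets, so the DHHC-1 strings must have been registered with the SPDK host earlier in the test, outside this excerpt. With SPDK's file-based keyring that registration plausibly looks like the following; the RPC choice, file paths, and loop are all assumptions for illustration:

    # Hypothetical prelude (not in this excerpt): register each DHHC-1 secret
    # under the name the attach calls reference. keyring_file_add_key is
    # SPDK's file-based keyring RPC; the /tmp paths are made up.
    for keyid in 0 1 2 3 4; do
        printf '%s\n' "${keys[keyid]}" > "/tmp/spdk.key${keyid}"
        chmod 0600 "/tmp/spdk.key${keyid}"
        rpc_cmd keyring_file_add_key "key${keyid}" "/tmp/spdk.key${keyid}"
    done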
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.379 10:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.044 nvme0n1 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.044 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.045 
10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.045 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.624 nvme0n1 00:29:16.624 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.624 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.624 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.624 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.624 10:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.884 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
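Each iteration reconfigures the host before connecting: bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 narrows what the initiator will offer to a single digest and a single DH group, so a successful attach proves that exact pair negotiated end to end rather than some stronger default. Stripped of the rpc_cmd wrapper, the call is just:

    # Pin the host to one DH-HMAC-CHAP digest and FFDHE group before
    # connecting; the flags are exactly as they appear in the trace above.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe8192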
DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.885 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.458 nvme0n1 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.458 10:46:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.458 10:46:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.458 10:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.400 nvme0n1 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:18.400 nvme0n1 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.400 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.401 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.662 nvme0n1 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:18.662 
10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.662 10:46:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.923 nvme0n1 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:18.923 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.924 
10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.924 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.184 nvme0n1 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.184 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.185 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.445 nvme0n1 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.445 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.446 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.707 nvme0n1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.707 
10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.707 10:46:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.707 10:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 nvme0n1 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:19.968 10:46:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.968 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.229 nvme0n1 00:29:20.229 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.229 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.229 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.229 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.229 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.230 10:46:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.230 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.491 nvme0n1 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.491 
10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.491 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
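For readers following the trace: every pass of the digests x dhgroups x keyids matrix above reduces to the same short sequence. The sketch below reconstructs one pass (sha512 / ffdhe3072 / keyid 4) from the xtrace. It is a minimal outline, not the verbatim host/auth.sh source: rpc_cmd and nvmet_auth_set_key are assumed to be the suite's own helpers visible in the trace (nvmf/common.sh and host/auth.sh), and key0..key4 / ckey0..ckey3 are assumed to be key names the test registered earlier in the run (their registration is not shown in this excerpt).

  # One pass of the DH-HMAC-CHAP test matrix (sketch assembled from the
  # xtrace above, under the assumptions stated in the note preceding it).
  digest=sha512; dhgroup=ffdhe3072; keyid=4

  # Target side: program the DHHC-1 secret for this keyid
  # (nvmet_auth_set_key is the helper invoked at host/auth.sh@103).
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: restrict negotiation to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with authentication. A --dhchap-ctrlr-key "ckey$keyid" argument
  # is appended only when the keyid defines a controller secret; keyid 4 has
  # none in this run, so the attach carries --dhchap-key key4 alone.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

  # Pass criterion: the controller appears under its expected name, then is
  # detached so the next digest/dhgroup/keyid combination starts clean.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Seen this way, the recurring common/autotest_common.sh@591 [[ 0 == 0 ]] guards that dominate the trace are, as far as the excerpt shows, the wrapper verifying that each RPC returned status 0 before the loop advances.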
00:29:20.752 nvme0n1 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.752 10:46:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.752 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.753 10:46:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.753 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.013 nvme0n1 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.013 10:46:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.013 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.014 10:46:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.014 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.274 nvme0n1 00:29:21.274 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.274 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.274 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.274 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.274 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.274 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.535 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.796 nvme0n1 00:29:21.796 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.796 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.796 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.796 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.796 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.796 10:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.796 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.057 nvme0n1 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.057 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.318 nvme0n1 00:29:22.318 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.318 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.318 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.318 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.318 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.318 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:22.579 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.580 10:46:54 
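
A detail worth unpacking from the host/auth.sh@58 frames: the controller (bidirectional) key is routed through bash's ${var:+word} alternate-value expansion, so the --dhchap-ctrlr-key flag pair only materializes when a controller key actually exists for that keyid. That is why the keyid=4 iterations, where ckey is empty (the "[[ -z '' ]]" branches above), attach with --dhchap-key only. A small self-contained demonstration with illustrative values:

    # ${ckeys[keyid]:+...} expands to the two flag words only when
    # ckeys[keyid] is set and non-empty; otherwise the array stays empty.
    ckeys=("DHHC-1:03:somekey:" "")
    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # -> --dhchap-ctrlr-key ckey0
    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # -> 0: no flags emitted at all
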
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.580 10:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.841 nvme0n1 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.841 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.102 10:46:55 
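
The attach immediately above is the actual authentication step: bdev_nvme_set_options first restricts the allowed digest and DH group for the handshake, then bdev_nvme_attach_controller dials the target at 10.0.0.1:4420 and runs DH-HMAC-CHAP with the named keys. Outside the rpc_cmd harness wrapper the same call would look roughly like the following, assuming (as in this run) that the key1/ckey1 names were registered with SPDK's keyring earlier, outside this excerpt:

    # Standalone equivalent of the rpc_cmd attach seen in the trace above,
    # issued with SPDK's RPC client from the repository root.
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
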
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.102 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.363 nvme0n1 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.363 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.623 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.623 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.623 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.623 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:23.623 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 10:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.885 nvme0n1 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.885 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.457 nvme0n1 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:24.457 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:24.458 10:46:56 
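
The get_main_ns_ip frames (nvmf/common.sh@769-783) that recur before every attach resolve which address the host should dial for the transport under test. The control flow below is inferred from those frames, a sketch rather than the verbatim SPDK source; the TEST_TRANSPORT variable name is an assumption, and the NVMF_* variables are expected to be set in the environment:

    # Inferred shape of get_main_ns_ip: map the transport to the name of the
    # env var holding the right address, then print its value by indirection.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1                            # @775: no usable candidate
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}    # @776: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # @778: checks the value itself
        echo "${!ip}"                           # @783: prints 10.0.0.1 here
    }
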
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.458 10:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.030 nvme0n1 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFlMzliNmFhN2JmNmM1YTM2NGM5ZmNmNzQzMjJno3Qi: 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTc4NTQxZjhkMzIxYTk5MDUyOTg3ZWUzMTA0ZDljODY2Y2Q1MzA1MWUwMzAxYTEyNTE2NzMwZjc4MDM3ZDI0NDurXrI=: 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.030 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.602 nvme0n1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.602 10:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.544 nvme0n1 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.544 10:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.544 10:46:58 
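nvmet_auth_set_key itself boils down to a short sequence of configfs writes: the echoes of 'hmac(sha512)', ffdhe8192, and the two DHHC-1 strings at @48-@51 land in the host entry of the kernel nvmet tree. A sketch under that assumption; the xtrace does not show redirect targets, so the attribute paths below are inferred from the kernel nvmet configfs layout rather than taken from the log:

    h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$h/dhchap_hash"      # digest selected at @48
    echo ffdhe8192      > "$h/dhchap_dhgroup"   # DH group selected at @49
    echo "$key"         > "$h/dhchap_key"       # host secret echoed at @50
    # the controller secret is optional; keyid 4 later in the run has none
    [[ -z $ckey ]] || echo "$ckey" > "$h/dhchap_ctrl_key"
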
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.544 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.115 nvme0n1 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzJmNWNkZjc3ODI0OTE0ODhiYjNjNGZhMzAzMjVlMzM1MDQ3YTYzZTQ5ZjUwZTUw2yxr1g==: 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTlkMzg3YmI1OGMwMzBjNTgwOGE4NmRhMDg1ZGJlMTlCGkZW: 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:27.115 10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.115 
10:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.687 nvme0n1 00:29:27.687 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.687 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.687 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.687 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.687 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.687 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzY4MjdhNjVkNTk3ZjEwY2MxMWU1ZTY2YTZiZmRkMDI0MTU0NGEzMzUzNjdiOWRiYmVhY2FkNTk5NDJiY2E4OAQ9NN8=: 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.947 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.519 nvme0n1 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:28.519 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.520 request: 00:29:28.520 { 00:29:28.520 "name": "nvme0", 00:29:28.520 "trtype": "tcp", 00:29:28.520 "traddr": "10.0.0.1", 00:29:28.520 "adrfam": "ipv4", 00:29:28.520 "trsvcid": "4420", 00:29:28.520 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:28.520 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:28.520 "prchk_reftag": false, 00:29:28.520 "prchk_guard": false, 00:29:28.520 "hdgst": false, 00:29:28.520 "ddgst": false, 00:29:28.520 "allow_unrecognized_csi": false, 00:29:28.520 "method": "bdev_nvme_attach_controller", 00:29:28.520 "req_id": 1 00:29:28.520 } 00:29:28.520 Got JSON-RPC error response 00:29:28.520 response: 00:29:28.520 { 00:29:28.520 "code": -5, 00:29:28.520 "message": "Input/output error" 00:29:28.520 } 00:29:28.520 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
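The request/response pair above is the first negative test: with the target still requiring DH-HCHAP, an attach that supplies no key must fail, and the JSON-RPC layer surfaces that as code -5, "Input/output error". The same call can be reproduced by hand with SPDK's RPC client, assuming the default RPC socket; the flags mirror the traced rpc_cmd invocation exactly:

    # expected to fail with -5 against an auth-required subsystem, as traced
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
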
00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:28.781 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.782 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:28.782 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.782 10:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.782 request: 00:29:28.782 { 00:29:28.782 "name": "nvme0", 00:29:28.782 "trtype": "tcp", 00:29:28.782 "traddr": "10.0.0.1", 00:29:28.782 "adrfam": "ipv4", 00:29:28.782 "trsvcid": "4420", 00:29:28.782 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:28.782 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:28.782 "prchk_reftag": false, 00:29:28.782 "prchk_guard": false, 00:29:28.782 "hdgst": false, 00:29:28.782 "ddgst": false, 00:29:28.782 "dhchap_key": "key2", 00:29:28.782 "allow_unrecognized_csi": false, 00:29:28.782 "method": "bdev_nvme_attach_controller", 00:29:28.782 "req_id": 1 00:29:28.782 } 00:29:28.782 Got JSON-RPC error response 00:29:28.782 response: 00:29:28.782 { 00:29:28.782 "code": -5, 00:29:28.782 "message": "Input/output error" 00:29:28.782 } 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
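Each of these expected failures runs through the harness's NOT wrapper (tagged common/autotest_common.sh@652-@679), which inverts the exit status so the step only passes when the wrapped command fails. A simplified sketch of the pattern visible in the trace; the real helper also special-cases exit codes above 128 for signal deaths, which is elided here:

    NOT() {
        local es=0
        "$@" || es=$?
        # succeed only when the wrapped command failed
        (( es != 0 ))
    }
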
00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.782 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.782 request: 00:29:28.782 { 00:29:28.782 "name": "nvme0", 00:29:28.782 "trtype": "tcp", 00:29:28.782 "traddr": "10.0.0.1", 00:29:28.782 "adrfam": "ipv4", 00:29:28.782 "trsvcid": "4420", 00:29:28.782 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:28.782 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:28.782 "prchk_reftag": false, 00:29:28.782 "prchk_guard": false, 00:29:28.782 "hdgst": false, 00:29:28.782 "ddgst": false, 00:29:28.782 "dhchap_key": "key1", 00:29:28.782 "dhchap_ctrlr_key": "ckey2", 00:29:28.782 "allow_unrecognized_csi": false, 00:29:28.782 "method": "bdev_nvme_attach_controller", 00:29:28.782 "req_id": 1 00:29:28.782 } 00:29:28.782 Got JSON-RPC error response 00:29:28.782 response: 00:29:28.782 { 00:29:28.782 "code": -5, 00:29:28.782 "message": "Input/output 
error" 00:29:28.782 } 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.044 nvme0n1 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.044 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.305 request: 00:29:29.305 { 00:29:29.305 "name": "nvme0", 00:29:29.305 "dhchap_key": "key1", 00:29:29.305 "dhchap_ctrlr_key": "ckey2", 00:29:29.305 "method": "bdev_nvme_set_keys", 00:29:29.305 "req_id": 1 00:29:29.305 } 00:29:29.305 Got JSON-RPC error response 00:29:29.305 response: 00:29:29.305 { 00:29:29.305 "code": -13, 00:29:29.305 "message": "Permission denied" 00:29:29.305 } 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:29.305 10:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:30.248 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODhkODYzMWU2MTg4MzNhNjMwOGFlMWQyMTkwYTQ0MTYwNzMxODYwNTliNzE3MDQy/ZfUBQ==: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODQ3YmIwYWY1MzM5YmIyOWY0ZDIzYmFkMmU4ZjVjMGE4NWZlM2JjZjE3Y2M4YzJk02mlBw==: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.630 nvme0n1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDdjNzY3NzFjMjY0M2I5ODA1ZmIyMjhmZTFlOWVjOTQdw0MX: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY3OGEwZTRhZDI4MTAwYzU4ZjJlMTQ4ZGQ0YjRiODFi1NE1: 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.630 request: 00:29:31.630 { 00:29:31.630 "name": "nvme0", 00:29:31.630 "dhchap_key": "key2", 00:29:31.630 "dhchap_ctrlr_key": "ckey1", 00:29:31.630 "method": "bdev_nvme_set_keys", 00:29:31.630 "req_id": 1 00:29:31.630 } 00:29:31.630 Got JSON-RPC error response 00:29:31.630 response: 00:29:31.630 { 00:29:31.630 "code": -13, 00:29:31.630 "message": "Permission denied" 00:29:31.630 } 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.630 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:31.631 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.631 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.631 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.631 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:31.631 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:32.572 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.572 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:32.572 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.572 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.572 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:32.833 10:47:04 
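The final phase above exercises bdev_nvme_set_keys, the re-keying RPC: a pair matching what the target holds is accepted (@133), a mismatched pair is rejected with -13, "Permission denied" (@136, @147), and between phases the script polls bdev_nvme_get_controllers until the controller has dropped (@148-@149). A condensed sketch of that sequence as traced, reusing the harness's rpc_cmd and NOT helpers:

    # accepted: pair matches the target's provisioned secrets (@133)
    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # rejected with -13: controller key does not match (@147)
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
    # wait for the controller to drop before the next phase (@148-@149)
    while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done
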
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.833 rmmod nvme_tcp 00:29:32.833 rmmod nvme_fabrics 00:29:32.833 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2208131 ']' 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2208131 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2208131 ']' 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2208131 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2208131 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2208131' 00:29:32.833 killing process with pid 2208131 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2208131 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2208131 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:32.833 10:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:35.401 10:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:38.704 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:38.704 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:39.275 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5ld /tmp/spdk.key-null.zuK /tmp/spdk.key-sha256.PJ9 /tmp/spdk.key-sha384.Bj3 /tmp/spdk.key-sha512.3IY /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:39.275 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:42.578 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
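The teardown traced here undoes the kernel-target setup symmetrically (host/auth.sh@25-@27, then clean_kernel_target at nvmf/common.sh@712-@723). As with the setup, xtrace hides redirect targets, so the destination of the traced 'echo 0' is an assumption (the namespace's enable attribute); the remaining commands are taken directly from the trace:

    s=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$s/allowed_hosts/nqn.2024-02.io.spdk:host0"                   # @25
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0    # @26
    echo 0 > "$s/namespaces/1/enable"   # assumed target of the traced 'echo 0' (@714)
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0  # @716
    rmdir "$s/namespaces/1"                                           # @717
    rmdir /sys/kernel/config/nvmet/ports/1                            # @718
    rmdir "$s"                                                        # @719
    modprobe -r nvmet_tcp nvmet                                       # @723
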
00:29:42.578 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:42.578 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:42.578 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:43.149 00:29:43.149 real 1m0.822s 00:29:43.149 user 0m54.639s 00:29:43.149 sys 0m16.045s 00:29:43.149 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:43.149 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.149 ************************************ 00:29:43.149 END TEST nvmf_auth_host 00:29:43.149 ************************************ 00:29:43.149 10:47:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:43.149 10:47:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.150 ************************************ 00:29:43.150 START TEST nvmf_digest 00:29:43.150 ************************************ 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:43.150 * Looking for test storage... 
00:29:43.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:43.150 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:43.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.412 --rc genhtml_branch_coverage=1 00:29:43.412 --rc genhtml_function_coverage=1 00:29:43.412 --rc genhtml_legend=1 00:29:43.412 --rc geninfo_all_blocks=1 00:29:43.412 --rc geninfo_unexecuted_blocks=1 00:29:43.412 00:29:43.412 ' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:43.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.412 --rc genhtml_branch_coverage=1 00:29:43.412 --rc genhtml_function_coverage=1 00:29:43.412 --rc genhtml_legend=1 00:29:43.412 --rc geninfo_all_blocks=1 00:29:43.412 --rc geninfo_unexecuted_blocks=1 00:29:43.412 00:29:43.412 ' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:43.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.412 --rc genhtml_branch_coverage=1 00:29:43.412 --rc genhtml_function_coverage=1 00:29:43.412 --rc genhtml_legend=1 00:29:43.412 --rc geninfo_all_blocks=1 00:29:43.412 --rc geninfo_unexecuted_blocks=1 00:29:43.412 00:29:43.412 ' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:43.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.412 --rc genhtml_branch_coverage=1 00:29:43.412 --rc genhtml_function_coverage=1 00:29:43.412 --rc genhtml_legend=1 00:29:43.412 --rc geninfo_all_blocks=1 00:29:43.412 --rc geninfo_unexecuted_blocks=1 00:29:43.412 00:29:43.412 ' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.412 
10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:43.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.412 10:47:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.412 10:47:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.550 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.551 
10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:51.551 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:51.551 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:51.551 Found net devices under 0000:4b:00.0: cvl_0_0 
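The "Found net devices under ..." entries above are produced by a sysfs walk that maps each matching PCI function to the kernel net device bound to it. A minimal stand-alone sketch of that lookup, assuming the two E810 port addresses reported on this test bed (the in-tree version is the pci_net_devs loop in nvmf/common.sh traced here):

# For each candidate PCI function, any bound netdev appears as an entry under
# /sys/bus/pci/devices/<bdf>/net/; its name is the basename of that entry.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue   # glob did not match: no netdev exposed
        echo "Found net devices under $pci: ${dev##*/}"
    done
done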
00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:51.551 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.551 10:47:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:29:51.551 00:29:51.551 --- 10.0.0.2 ping statistics --- 00:29:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.551 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:29:51.551 00:29:51.551 --- 10.0.0.1 ping statistics --- 00:29:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.551 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.551 ************************************ 00:29:51.551 START TEST nvmf_digest_clean 00:29:51.551 ************************************ 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2225112 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2225112 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:51.551 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2225112 ']' 00:29:51.552 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.552 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.552 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.552 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.552 10:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.552 [2024-11-20 10:47:23.214097] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:29:51.552 [2024-11-20 10:47:23.214156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.552 [2024-11-20 10:47:23.312706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.552 [2024-11-20 10:47:23.363215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.552 [2024-11-20 10:47:23.363265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.552 [2024-11-20 10:47:23.363273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.552 [2024-11-20 10:47:23.363281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.552 [2024-11-20 10:47:23.363287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
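Taken together, the namespace plumbing traced a few entries back and the target launch above reduce to the sketch below. Interface names, addresses, and paths are the ones logged in this run; the real helpers (nvmf_tcp_init, nvmfappstart) carry extra error handling and iptables bookkeeping.

# Put the target-side port in its own network namespace so one host can act
# as both initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, as the ping output above shows.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the target inside the namespace; --wait-for-rpc holds off subsystem
# initialization until framework_start_init arrives over the RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &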
00:29:51.552 [2024-11-20 10:47:23.364087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.813 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.813 null0 00:29:51.813 [2024-11-20 10:47:24.162623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.074 [2024-11-20 10:47:24.186944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2225374 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2225374 /var/tmp/bperf.sock 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2225374 ']' 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:52.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.074 10:47:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:52.074 [2024-11-20 10:47:24.248850] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:29:52.074 [2024-11-20 10:47:24.248916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225374 ] 00:29:52.074 [2024-11-20 10:47:24.340044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.074 [2024-11-20 10:47:24.392722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.017 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.277 nvme0n1 00:29:53.540 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:53.540 10:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:53.540 Running I/O for 2 seconds... 
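Everything in the two-second run just started is driven over the bdevperf RPC socket. Spelled out as plain commands, the sequence the harness traced for this pass is the following (socket path, target address, and NQN exactly as logged; bperf_rpc and bperf_py are thin wrappers around these two scripts):

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

# bdevperf was started with --wait-for-rpc, so finish framework init first.
$rpc framework_start_init

# --ddgst turns on the NVMe/TCP data digest; every I/O then goes through a
# CRC32C operation in the accel framework, which is what this test exercises.
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the configured workload (randread, 4 KiB blocks, queue depth 128 here).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests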
00:29:55.424 19243.00 IOPS, 75.17 MiB/s [2024-11-20T09:47:27.800Z] 20378.50 IOPS, 79.60 MiB/s
00:29:55.424 Latency(us)
00:29:55.424 [2024-11-20T09:47:27.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.424 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:55.424 nvme0n1 : 2.00 20416.25 79.75 0.00 0.00 6262.13 2635.09 14308.69
00:29:55.424 [2024-11-20T09:47:27.800Z] ===================================================================================================================
00:29:55.424 [2024-11-20T09:47:27.800Z] Total : 20416.25 79.75 0.00 0.00 6262.13 2635.09 14308.69
00:29:55.424 {
00:29:55.424 "results": [
00:29:55.424 {
00:29:55.424 "job": "nvme0n1",
00:29:55.424 "core_mask": "0x2",
00:29:55.424 "workload": "randread",
00:29:55.424 "status": "finished",
00:29:55.424 "queue_depth": 128,
00:29:55.424 "io_size": 4096,
00:29:55.424 "runtime": 2.004384,
00:29:55.424 "iops": 20416.247585293037,
00:29:55.424 "mibps": 79.75096713005092,
00:29:55.424 "io_failed": 0,
00:29:55.424 "io_timeout": 0,
00:29:55.424 "avg_latency_us": 6262.131094276917,
00:29:55.424 "min_latency_us": 2635.0933333333332,
00:29:55.424 "max_latency_us": 14308.693333333333
00:29:55.424 }
00:29:55.424 ],
00:29:55.424 "core_count": 1
00:29:55.424 }
00:29:55.424 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:55.424 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:55.424 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:55.424 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:55.424 | select(.opcode=="crc32c")
00:29:55.424 | "\(.module_name) \(.executed)"'
00:29:55.424 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2225374
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2225374 ']'
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2225374
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:55.685 10:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225374
00:29:55.685 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:55.685 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:29:55.685 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225374' 00:29:55.685 killing process with pid 2225374 00:29:55.685 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2225374 00:29:55.685 Received shutdown signal, test time was about 2.000000 seconds 00:29:55.685 00:29:55.685 Latency(us) 00:29:55.685 [2024-11-20T09:47:28.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.685 [2024-11-20T09:47:28.061Z] =================================================================================================================== 00:29:55.685 [2024-11-20T09:47:28.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:55.685 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2225374 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2226146 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2226146 /var/tmp/bperf.sock 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2226146 ']' 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:55.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:55.945 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.946 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:55.946 [2024-11-20 10:47:28.179677] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:29:55.946 [2024-11-20 10:47:28.179734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226146 ] 00:29:55.946 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:55.946 Zero copy mechanism will not be used. 00:29:55.946 [2024-11-20 10:47:28.264238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.946 [2024-11-20 10:47:28.294304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.888 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.888 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:56.888 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:56.888 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:56.888 10:47:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:56.888 10:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:56.888 10:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:57.157 nvme0n1 00:29:57.157 10:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:57.158 10:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:57.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:57.418 Zero copy mechanism will not be used. 00:29:57.418 Running I/O for 2 seconds... 
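As with the first pass, once this run completes the harness reads back accel statistics over the same socket and asserts that the expected module executed a non-zero number of CRC32C operations. The check condenses to the lines below (filter verbatim from the trace; with scan_dsa=false the expected module name is software):

# Emits one "module_name executed" pair per crc32c entry in the stats.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# The harness then requires module_name == software and executed > 0; a
# DSA-enabled variant of this test would expect the dsa module instead.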
00:29:59.300 3345.00 IOPS, 418.12 MiB/s [2024-11-20T09:47:31.676Z] 3456.50 IOPS, 432.06 MiB/s
00:29:59.300 Latency(us)
00:29:59.300 [2024-11-20T09:47:31.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:59.300 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:59.300 nvme0n1 : 2.05 3387.33 423.42 0.00 0.00 4632.12 785.07 46093.65
00:29:59.300 [2024-11-20T09:47:31.676Z] ===================================================================================================================
00:29:59.300 [2024-11-20T09:47:31.676Z] Total : 3387.33 423.42 0.00 0.00 4632.12 785.07 46093.65
00:29:59.300 {
00:29:59.300 "results": [
00:29:59.300 {
00:29:59.300 "job": "nvme0n1",
00:29:59.300 "core_mask": "0x2",
00:29:59.300 "workload": "randread",
00:29:59.300 "status": "finished",
00:29:59.300 "queue_depth": 16,
00:29:59.300 "io_size": 131072,
00:29:59.300 "runtime": 2.045564,
00:29:59.300 "iops": 3387.329851327067,
00:29:59.300 "mibps": 423.4162314158834,
00:29:59.300 "io_failed": 0,
00:29:59.300 "io_timeout": 0,
00:29:59.300 "avg_latency_us": 4632.1172271131,
00:29:59.300 "min_latency_us": 785.0666666666667,
00:29:59.300 "max_latency_us": 46093.653333333335
00:29:59.300 }
00:29:59.300 ],
00:29:59.300 "core_count": 1
00:29:59.300 }
00:29:59.300 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:59.300 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:59.300 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:59.300 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:59.300 | select(.opcode=="crc32c")
00:29:59.300 | "\(.module_name) \(.executed)"'
00:29:59.300 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2226146
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2226146 ']'
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2226146
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226146
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226146' 00:29:59.561 killing process with pid 2226146 00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2226146 00:29:59.561 Received shutdown signal, test time was about 2.000000 seconds 00:29:59.561 00:29:59.561 Latency(us) 00:29:59.561 [2024-11-20T09:47:31.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.561 [2024-11-20T09:47:31.937Z] =================================================================================================================== 00:29:59.561 [2024-11-20T09:47:31.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.561 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2226146 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2226834 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2226834 /var/tmp/bperf.sock 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2226834 ']' 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:59.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.821 10:47:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:59.821 [2024-11-20 10:47:32.006746] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:29:59.821 [2024-11-20 10:47:32.006803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226834 ] 00:29:59.821 [2024-11-20 10:47:32.091196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.821 [2024-11-20 10:47:32.120469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.762 10:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.762 10:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:00.762 10:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:00.762 10:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:00.762 10:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:00.762 10:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.762 10:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:01.333 nvme0n1 00:30:01.333 10:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:01.333 10:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:01.333 Running I/O for 2 seconds... 
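Between passes, each finished bdevperf instance is torn down through killprocess. Condensed from the autotest_common.sh steps traced above, the logic is roughly this (a sketch, not the in-tree function verbatim; it omits non-Linux handling and alternate signals):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                  # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0     # process already gone
    if [[ $(uname) = Linux ]]; then
        # Guard against pid reuse: never signal a pid that now belongs to sudo.
        [[ $(ps --no-headers -o comm= "$pid") = sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                 # SIGTERM, then reap the child
}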
00:30:03.213 30583.00 IOPS, 119.46 MiB/s [2024-11-20T09:47:35.589Z] 30626.50 IOPS, 119.63 MiB/s
00:30:03.213 Latency(us)
00:30:03.213 [2024-11-20T09:47:35.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:03.213 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:03.213 nvme0n1 : 2.00 30639.10 119.68 0.00 0.00 4173.11 2048.00 8519.68
00:30:03.213 [2024-11-20T09:47:35.589Z] ===================================================================================================================
00:30:03.213 [2024-11-20T09:47:35.589Z] Total : 30639.10 119.68 0.00 0.00 4173.11 2048.00 8519.68
00:30:03.213 {
00:30:03.213 "results": [
00:30:03.213 {
00:30:03.213 "job": "nvme0n1",
00:30:03.213 "core_mask": "0x2",
00:30:03.213 "workload": "randwrite",
00:30:03.213 "status": "finished",
00:30:03.213 "queue_depth": 128,
00:30:03.213 "io_size": 4096,
00:30:03.213 "runtime": 2.003355,
00:30:03.213 "iops": 30639.10290487707,
00:30:03.213 "mibps": 119.68399572217605,
00:30:03.213 "io_failed": 0,
00:30:03.213 "io_timeout": 0,
00:30:03.213 "avg_latency_us": 4173.1110486958505,
00:30:03.213 "min_latency_us": 2048.0,
00:30:03.213 "max_latency_us": 8519.68
00:30:03.213 }
00:30:03.213 ],
00:30:03.213 "core_count": 1
00:30:03.213 }
00:30:03.213 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:03.213 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:03.213 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:03.213 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:03.213 | select(.opcode=="crc32c")
00:30:03.213 | "\(.module_name) \(.executed)"'
00:30:03.213 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2226834
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2226834 ']'
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2226834
00:30:03.473 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226834
00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo
']' 00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226834' 00:30:03.474 killing process with pid 2226834 00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2226834 00:30:03.474 Received shutdown signal, test time was about 2.000000 seconds 00:30:03.474 00:30:03.474 Latency(us) 00:30:03.474 [2024-11-20T09:47:35.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.474 [2024-11-20T09:47:35.850Z] =================================================================================================================== 00:30:03.474 [2024-11-20T09:47:35.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:03.474 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2226834 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2227522 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2227522 /var/tmp/bperf.sock 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2227522 ']' 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:03.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.735 10:47:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:03.735 [2024-11-20 10:47:35.948743] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:30:03.735 [2024-11-20 10:47:35.948797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227522 ]
00:30:03.735 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:03.735 Zero copy mechanism will not be used.
00:30:03.735 [2024-11-20 10:47:36.031758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:03.735 [2024-11-20 10:47:36.060455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:04.429 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:04.429 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:30:04.429 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:30:04.429 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:30:04.429 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:30:04.712 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:04.712 10:47:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:04.972 nvme0n1
00:30:05.233 10:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:30:05.233 10:47:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:05.233 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:05.233 Zero copy mechanism will not be used.
00:30:05.233 Running I/O for 2 seconds...
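While the 128 KiB run is in flight, a note on the accel-stats gate that ran after the 4 KiB results above (host/digest.sh@93-96 in the trace) and runs again after every run_bperf iteration: accel_get_stats is queried over the bperf socket and the jq filter keeps only the crc32c opcode, printing "<module_name> <executed>". Condensed into a standalone sketch (paraphrased from the trace, not the verbatim script):

  # Ask bdevperf's accel framework how many crc32c ops ran, and in which module.
  read -r acc_module acc_executed < <(
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # scan_dsa=false in this run, so the digests must have been computed by the
  # plain software crc32c module, and at least once:
  [[ $acc_module == software ]] && (( acc_executed > 0 ))

The MiB/s column in the result tables is simply iops x io_size / 2^20: 30639.10 x 4096 B comes to about 119.68 MiB/s for the 4 KiB run above, and 6532.31 x 128 KiB to about 816.54 MiB/s for the run reported next.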
00:30:07.114 6277.00 IOPS, 784.62 MiB/s
[2024-11-20T09:47:39.490Z] 6542.50 IOPS, 817.81 MiB/s
00:30:07.114 Latency(us)
[2024-11-20T09:47:39.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:07.114 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:07.114 nvme0n1 : 2.01 6532.31 816.54 0.00 0.00 2443.95 1140.05 9338.88
00:30:07.114 [2024-11-20T09:47:39.490Z] ===================================================================================================================
00:30:07.114 [2024-11-20T09:47:39.490Z] Total : 6532.31 816.54 0.00 0.00 2443.95 1140.05 9338.88
00:30:07.114 {
00:30:07.114   "results": [
00:30:07.114     {
00:30:07.114       "job": "nvme0n1",
00:30:07.114       "core_mask": "0x2",
00:30:07.114       "workload": "randwrite",
00:30:07.114       "status": "finished",
00:30:07.114       "queue_depth": 16,
00:30:07.114       "io_size": 131072,
00:30:07.114       "runtime": 2.006028,
00:30:07.114       "iops": 6532.311612799023,
00:30:07.114       "mibps": 816.5389515998779,
00:30:07.114       "io_failed": 0,
00:30:07.114       "io_timeout": 0,
00:30:07.114       "avg_latency_us": 2443.952397232397,
00:30:07.114       "min_latency_us": 1140.0533333333333,
00:30:07.114       "max_latency_us": 9338.88
00:30:07.114     }
00:30:07.114   ],
00:30:07.114   "core_count": 1
00:30:07.114 }
00:30:07.114 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:30:07.114 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:30:07.114 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:30:07.114 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:30:07.114 | select(.opcode=="crc32c")
00:30:07.114 | "\(.module_name) \(.executed)"'
00:30:07.114 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2227522
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2227522 ']'
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2227522
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227522
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227522'
00:30:07.375 killing process with pid 2227522
00:30:07.375 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2227522
00:30:07.375 Received shutdown signal, test time was about 2.000000 seconds
00:30:07.375
00:30:07.375 Latency(us)
[2024-11-20T09:47:39.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:07.376 [2024-11-20T09:47:39.752Z] ===================================================================================================================
00:30:07.376 [2024-11-20T09:47:39.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:07.376 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2227522
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2225112
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2225112 ']'
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2225112
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225112
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225112'
00:30:07.637 killing process with pid 2225112
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2225112
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2225112
00:30:07.637
00:30:07.637 real 0m16.836s
00:30:07.637 user 0m33.199s
00:30:07.637 sys 0m3.852s
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:07.637 10:47:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:30:07.637 ************************************
00:30:07.637 END TEST nvmf_digest_clean
00:30:07.637 ************************************
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:07.898 ************************************
00:30:07.898 START TEST nvmf_digest_error
00:30:07.898 ************************************
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2228432
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2228432
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2228432 ']'
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:07.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:07.898 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:07.898 [2024-11-20 10:47:40.127979] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:30:07.898 [2024-11-20 10:47:40.128031] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:07.898 [2024-11-20 10:47:40.217593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:07.898 [2024-11-20 10:47:40.248331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:07.898 [2024-11-20 10:47:40.248360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:07.898 [2024-11-20 10:47:40.248365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:07.898 [2024-11-20 10:47:40.248370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:07.898 [2024-11-20 10:47:40.248374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
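The error variant starts the target with --wait-for-rpc because the crc32c-to-error-module assignment (digest.sh@104, just below) must land before the accel framework initializes. Reduced to plain commands, the startup sequence is roughly the following sketch; the rpc_get_methods poll stands in for the waitforlisten helper and is an assumption, not its exact mechanism:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done  # wait for /var/tmp/spdk.sock
  $RPC accel_assign_opc -o crc32c -m error  # must precede accel framework init
  $RPC framework_start_init                 # then let the app finish coming up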
00:30:07.898 [2024-11-20 10:47:40.248829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.838 [2024-11-20 10:47:40.958767] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.838 10:47:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.838 null0 00:30:08.838 [2024-11-20 10:47:41.036510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.838 [2024-11-20 10:47:41.060696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2228584 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2228584 /var/tmp/bperf.sock 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2228584 ']' 00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
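waitforlisten now blocks until this second bdevperf instance opens its RPC socket. The invocation is worth annotating, since every digest test in this file uses the same shape; the flag readings below come from bdevperf's usage text, not from this log, so treat them as a best-effort gloss:

  # -m 2                   core mask 0x2: run on core 1, leaving core 0 to the target
  # -r /var/tmp/bperf.sock  private RPC socket, separate from the target's /var/tmp/spdk.sock
  # -w randread -o 4096     workload and I/O size (4 KiB random reads)
  # -t 2 -q 128             2-second run at queue depth 128
  # -z                      start idle: bdevs are attached over RPC and the run is
  #                         kicked off later by bdevperf.py perform_tests
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z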
00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:08.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:08.838 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:08.838 [2024-11-20 10:47:41.117319] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:30:08.838 [2024-11-20 10:47:41.117366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228584 ]
00:30:08.838 [2024-11-20 10:47:41.198011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:09.098 [2024-11-20 10:47:41.227737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:09.669 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:09.669 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:09.669 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:09.669 10:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:09.929 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:09.929 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:09.929 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:09.929 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:09.929 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:09.929 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:10.189 nvme0n1
00:30:10.189 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:10.189 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:10.189 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
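Everything for the negative test is now armed. Condensing the trace above into plain commands (a sketch of the same sequence; as in the trace, rpc_cmd goes to the target's default socket while the bperf calls carry -s /var/tmp/bperf.sock):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Initiator side: keep per-NVMe error counters and retry failed I/O
  # indefinitely at the bdev layer, so injected failures are visible
  # without failing the bdev.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection stays off while the controller connects with data digest (--ddgst) on...
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then corrupt-type injection is armed on crc32c (-i 256, as in the trace)
  # before perform_tests starts the run. Each corrupted digest surfaces below as a
  # "data digest error" plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256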
00:30:10.189 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.189 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:10.189 10:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.189 Running I/O for 2 seconds... 00:30:10.450 [2024-11-20 10:47:42.584695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.450 [2024-11-20 10:47:42.584727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.450 [2024-11-20 10:47:42.584736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.595682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.595701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.595709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.606201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.606220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.606227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.616108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.616125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.616132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.623865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.623883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.623890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.634621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.634638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.634645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.645294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.645311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.645317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.655841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.655864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.655871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.666751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.666768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.666774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.675553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.675570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.675577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.685166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.685183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.685189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.696959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.696975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.696982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.709099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.709115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.709122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.718191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.718208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.718214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.726585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.726602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.726609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.737141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.737161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.737168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.748346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.748363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.748369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.757716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.757733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.757740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.765020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.765037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.765043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.775989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.776006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 
10:47:42.776012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.783932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.783949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.783955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.793377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.793394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.793401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.802613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.802630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.802637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.810907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.810924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.810930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.451 [2024-11-20 10:47:42.819816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.451 [2024-11-20 10:47:42.819836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.451 [2024-11-20 10:47:42.819842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.828945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.828962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.828968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.838130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.838146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20329 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.838153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.846738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.846755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.846762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.855751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.855768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.855775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.864052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.864069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.864076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.873786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.873803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.873809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.882168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.882184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.882190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.892386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.892402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.892409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.901505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.901523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.901529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.911102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.911118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.911125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.919669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.919686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.919692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.928555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.928571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.928577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.936517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.936534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.936541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.946190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.946206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.946213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.955505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.955522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.955528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.964922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.964939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.964945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.973167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.973184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.973194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.981911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.981927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.981934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.990601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.990617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.990624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:42.999626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:42.999643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:42.999649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:43.008537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:43.008553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:43.008560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:43.019050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.713 [2024-11-20 10:47:43.019066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.713 [2024-11-20 10:47:43.019072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.713 [2024-11-20 10:47:43.027181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 
00:30:10.714 [2024-11-20 10:47:43.027197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.027203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.714 [2024-11-20 10:47:43.036746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.714 [2024-11-20 10:47:43.036763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.714 [2024-11-20 10:47:43.047594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.714 [2024-11-20 10:47:43.047610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.047616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.714 [2024-11-20 10:47:43.057647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.714 [2024-11-20 10:47:43.057667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.057673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.714 [2024-11-20 10:47:43.066964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.714 [2024-11-20 10:47:43.066981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.066988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.714 [2024-11-20 10:47:43.076391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.714 [2024-11-20 10:47:43.076409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.076415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.714 [2024-11-20 10:47:43.084397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:10.714 [2024-11-20 10:47:43.084414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.714 [2024-11-20 10:47:43.084420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.975 [2024-11-20 10:47:43.093751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11cc5c0)
00:30:10.975 [2024-11-20 10:47:43.093769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.975 [2024-11-20 10:47:43.093775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:10.975 [2024-11-20 10:47:43.102313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0)
00:30:10.975 [2024-11-20 10:47:43.102330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.975 [2024-11-20 10:47:43.102336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1365 data digest error, nvme_qpair.c: 243 READ command print, nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for many more READ commands on tqpair 0x11cc5c0 with varying cid/lba, 10:47:43.111 through 10:47:43.559 ...]
00:30:11.238 26924.00 IOPS, 105.17 MiB/s [2024-11-20T09:47:43.614Z]
[... the digest-error sequence continues unchanged for many more READ commands, 10:47:43.568 through 10:47:44.410 ...]
00:30:12.288 [2024-11-20 10:47:44.421447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) [2024-11-20 10:47:44.421464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11977
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.421471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.428588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.428606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.428612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.438826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.438844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.438851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.449971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.449989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.449995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.458833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.458850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.458856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.466896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.466913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.466920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.476484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.476500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.476507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.483552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.483569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.483576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.494383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.494400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.494407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.504133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.504151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.504157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.512552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.512570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.512576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.521217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.521234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.521240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.530420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.530437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.530443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.539332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 00:30:12.288 [2024-11-20 10:47:44.539349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.288 [2024-11-20 10:47:44.539355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:12.288 [2024-11-20 10:47:44.548697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0) 
00:30:12.288 [2024-11-20 10:47:44.548715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:12.288 [2024-11-20 10:47:44.548721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:12.288 [2024-11-20 10:47:44.556299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0)
00:30:12.288 [2024-11-20 10:47:44.556316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:12.288 [2024-11-20 10:47:44.556326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:12.288 [2024-11-20 10:47:44.565402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11cc5c0)
00:30:12.288 [2024-11-20 10:47:44.565419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:12.288 [2024-11-20 10:47:44.565426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:12.288 27441.00 IOPS, 107.19 MiB/s
00:30:12.288 Latency(us)
00:30:12.288 [2024-11-20T09:47:44.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.288 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:12.288 nvme0n1 : 2.00 27447.03 107.21 0.00 0.00 4658.18 2184.53 19770.03
00:30:12.288 [2024-11-20T09:47:44.664Z] ===================================================================================================================
00:30:12.288 [2024-11-20T09:47:44.664Z] Total : 27447.03 107.21 0.00 0.00 4658.18 2184.53 19770.03
00:30:12.288 {
00:30:12.288   "results": [
00:30:12.288     {
00:30:12.288       "job": "nvme0n1",
00:30:12.288       "core_mask": "0x2",
00:30:12.288       "workload": "randread",
00:30:12.288       "status": "finished",
00:30:12.288       "queue_depth": 128,
00:30:12.289       "io_size": 4096,
00:30:12.289       "runtime": 2.004224,
00:30:12.289       "iops": 27447.03186869332,
00:30:12.289       "mibps": 107.21496823708328,
00:30:12.289       "io_failed": 0,
00:30:12.289       "io_timeout": 0,
00:30:12.289       "avg_latency_us": 4658.175937950676,
00:30:12.289       "min_latency_us": 2184.5333333333333,
00:30:12.289       "max_latency_us": 19770.02666666667
00:30:12.289     }
00:30:12.289   ],
00:30:12.289   "core_count": 1
00:30:12.289 }
00:30:12.289 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:12.289 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:12.289 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:12.289 | .driver_specific
00:30:12.289 | .nvme_error
00:30:12.289 | .status_code
00:30:12.289 | .command_transient_transport_error'
00:30:12.289 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:12.549 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
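The xtrace above is the whole verdict for this run: get_transient_errcount dumps the iostat for nvme0n1, and because the controller was configured with --nvme-error-stat the dump carries per-status-code NVMe error counters; the run passes only if the COMMAND TRANSIENT TRANSPORT ERROR count is non-zero (215 here, while the JSON summary shows io_failed 0, i.e. the errored reads were retried rather than surfaced). A minimal standalone sketch of the same check, assuming this job's SPDK checkout and a bdevperf instance still listening on /var/tmp/bperf.sock; only the shell variable names are ours, the commands and the jq path are the ones logged:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # --nvme-error-stat makes bdev_get_iostat report NVMe errors per status
    # code; injected digest errors land on command_transient_transport_error.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Non-zero means the corrupted data digests were detected and retried.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"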
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2228584
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2228584 ']'
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2228584
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228584
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228584'
00:30:12.550 killing process with pid 2228584
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2228584
00:30:12.550 Received shutdown signal, test time was about 2.000000 seconds
00:30:12.550
00:30:12.550 Latency(us)
00:30:12.550 [2024-11-20T09:47:44.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.550 [2024-11-20T09:47:44.926Z] ===================================================================================================================
00:30:12.550 [2024-11-20T09:47:44.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:12.550 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2228584
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2229288
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2229288 /var/tmp/bperf.sock
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2229288 ']'
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
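Here run_bperf_err randread 131072 16 starts the next exercise: a fresh bdevperf is launched idle, then configured and driven entirely over /var/tmp/bperf.sock, with crc32c corruption armed in the accel layer so that receive-side data digest checks fail. A condensed sketch of the sequence the xtrace just above and below replays, using only paths and flags visible in this log; the waitforlisten polling loop and error handling are elided, and the semantics attributed to each flag follow the logged behavior rather than any flag documentation:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock
    # Start bdevperf on core 1 (-m 2) in wait-for-RPC mode (-z): a 2-second
    # randread job (-t 2), 128 KiB I/Os (-o 131072), queue depth 16, no bdevs yet.
    "$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # ... waitforlisten "$bperfpid" "$sock": poll until the socket accepts RPCs ...
    # Keep per-status-code NVMe error stats and retry failed I/O indefinitely.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any stale injection, attach the target with data digest enabled
    # (--ddgst), then arm crc32c corruption with the logged parameters (-i 32).
    "$spdk"/scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$spdk"/scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the configured job; each corrupted digest shows up below as a
    # READ completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

(bperf_rpc and bperf_py in the trace are digest.sh's thin wrappers that invoke rpc.py and bdevperf.py with -s /var/tmp/bperf.sock preset, as the paired @18/@19 xtrace lines show.)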
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:12.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:12.810 10:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:12.810 [2024-11-20 10:47:45.016677] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:30:12.810 [2024-11-20 10:47:45.016735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229288 ]
00:30:12.810 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:12.810 Zero copy mechanism will not be used.
00:30:12.810 [2024-11-20 10:47:45.100116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:12.810 [2024-11-20 10:47:45.130331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:13.751 10:47:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:14.013 nvme0n1
00:30:14.013 10:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:14.013 10:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:14.013 10:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:14.013 10:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:14.013 10:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:14.013 10:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:14.013 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:14.013 Zero copy mechanism will not be used. 00:30:14.013 Running I/O for 2 seconds... 00:30:14.013 [2024-11-20 10:47:46.376289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.013 [2024-11-20 10:47:46.376322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.013 [2024-11-20 10:47:46.376331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.013 [2024-11-20 10:47:46.380904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.013 [2024-11-20 10:47:46.380931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.013 [2024-11-20 10:47:46.380943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.013 [2024-11-20 10:47:46.385538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.013 [2024-11-20 10:47:46.385557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.013 [2024-11-20 10:47:46.385564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.274 [2024-11-20 10:47:46.397422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.274 [2024-11-20 10:47:46.397443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.274 [2024-11-20 10:47:46.397450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.274 [2024-11-20 10:47:46.409244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.409263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.409273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.421436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.421458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.421468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.432785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 
10:47:46.432810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.432820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.445297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.445317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.445324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.457167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.457185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.457191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.469199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.469218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.469225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.481354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.481372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.481379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.493229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.493250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.493258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.505501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.505521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.505528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.517197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.517216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.517222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.527696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.527717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.527724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.540136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.540155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.540170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.551751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.551772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.551780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.564122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.564141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.564148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.574174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.574193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.574199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.580137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.580165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.580172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.584438] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.584457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.584464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.590330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.590350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.590357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.594451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.594470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.594477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.603818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.603838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.603849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.608236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.608255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.608262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.612531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.612551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.612557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.617869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.617889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.617895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:30:14.275 [2024-11-20 10:47:46.622051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.622070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.622076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.626155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.626178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.626184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.630282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.630302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.275 [2024-11-20 10:47:46.630309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.275 [2024-11-20 10:47:46.634064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.275 [2024-11-20 10:47:46.634085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.276 [2024-11-20 10:47:46.634091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.276 [2024-11-20 10:47:46.639451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.276 [2024-11-20 10:47:46.639472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.276 [2024-11-20 10:47:46.639478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.276 [2024-11-20 10:47:46.643781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.276 [2024-11-20 10:47:46.643801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.276 [2024-11-20 10:47:46.643809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.648016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.648037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.648044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.654031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.654053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.654062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.658423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.658442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.658449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.662544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.662565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.662571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.666553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.666573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.666580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.670554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.670575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.670582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.674624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.674645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.674651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.678704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.678724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.678735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.684072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.684094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.684102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.688693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.688713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.688720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.696978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.696997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.697004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.703991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.704011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.704018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.711122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.711142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.711149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.720134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.720155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.720170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.726269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.726289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.726296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.730345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.730365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.730372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.734946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.734970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.734979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.738750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.738770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.738777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.743060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.743081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.743088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.751187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.751207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.751214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.757380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.757400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.539 [2024-11-20 10:47:46.757406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.539 [2024-11-20 10:47:46.762131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:14.539 [2024-11-20 10:47:46.762153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:14.539 [2024-11-20 10:47:46.762170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:14.539 [2024-11-20 10:47:46.764767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10)
00:30:14.539 [2024-11-20 10:47:46.764786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:14.539 [2024-11-20 10:47:46.764792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:14.539 [2024-11-20 10:47:46.770935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10)
00:30:14.539 [2024-11-20 10:47:46.770955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:14.540 [2024-11-20 10:47:46.770961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-entry pattern (data digest error on tqpair=(0x1b8ca10), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more qid:1 READ commands, 10:47:46.777 through 10:47:47.358 ...]
00:30:15.069 4803.00 IOPS, 600.38 MiB/s [2024-11-20T09:47:47.445Z]
[... the pattern continues, 10:47:47.367 through 10:47:47.586 ...]
00:30:15.332 [2024-11-20 10:47:47.590454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10)
00:30:15.332 [2024-11-20 10:47:47.590473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.590480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.594298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.594317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.594323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.597866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.597887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.597894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.602051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.602071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.602077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.605951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.605971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.605977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.611538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.611561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.611568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.617018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.617039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.617045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.620888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.332 [2024-11-20 10:47:47.620909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.332 [2024-11-20 10:47:47.620920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.332 [2024-11-20 10:47:47.623805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.623824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.623830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.630455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.630473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.630479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.635414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.635433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.635439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.638884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.638904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.638910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.642672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.642691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.642697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.646784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.646804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.646811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.654407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 
00:30:15.333 [2024-11-20 10:47:47.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.654434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.661792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.661811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.661817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.666825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.666845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.666852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.671327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.671345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.671352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.675346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.675366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.675372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.684031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.684053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.684059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.687750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.687770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.687776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.691688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.691708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.691715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.694467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.694489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.694504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.697941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.697960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.697966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.333 [2024-11-20 10:47:47.701923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.333 [2024-11-20 10:47:47.701942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.333 [2024-11-20 10:47:47.701948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.708111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.708131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.708138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.714920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.714940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.714946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.720560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.720578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.720584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.725594] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.725613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.725619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.729916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.729934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.729941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.734194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.734212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.734219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.737637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.737660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.737667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.744497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.744518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.744525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.750500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.750518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.750525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.754716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.754735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.754741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:30:15.598 [2024-11-20 10:47:47.758505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.758530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.758541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.762743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.762762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.762768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.770740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.770758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.770765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.777515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.777535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.777541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.781934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.781954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.781961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.785796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.785817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.785826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.790175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.790196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.790203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.797578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.797598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.797604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.804465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.804484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.804492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.808606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.808626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.808633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.812386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.598 [2024-11-20 10:47:47.812406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.598 [2024-11-20 10:47:47.812412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.598 [2024-11-20 10:47:47.816109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.816129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.816136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.820550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.820570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.820576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.826568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.826587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.826597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.830348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.830368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.830375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.836777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.836798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.836805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.842742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.842763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.842770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.848226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.848246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.848253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.855794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.855814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.855821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.862600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.862619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.862626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.867506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.867529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 
[2024-11-20 10:47:47.867536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.871978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.871997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.872006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.876241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.876260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.876267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.882287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.882306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.882313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.888512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.888531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.888538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.892725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.892751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.892759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.899676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.899696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.899702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.903787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.903807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.903814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.907601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.907621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.907628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.911699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.911719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.911725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.915588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.915607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.915617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.922702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.922721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.922728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.928613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.928633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.928639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.932449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.932469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.932475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.936354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.936375] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.936382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.940137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.940156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.940170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.944045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.944065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.944072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.599 [2024-11-20 10:47:47.948930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.599 [2024-11-20 10:47:47.948950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.599 [2024-11-20 10:47:47.948957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.600 [2024-11-20 10:47:47.953286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.600 [2024-11-20 10:47:47.953304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.600 [2024-11-20 10:47:47.953311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.600 [2024-11-20 10:47:47.962040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.600 [2024-11-20 10:47:47.962064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.600 [2024-11-20 10:47:47.962071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.600 [2024-11-20 10:47:47.966695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.600 [2024-11-20 10:47:47.966715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.600 [2024-11-20 10:47:47.966723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:47.970908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 
10:47:47.970928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:47.970935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:47.974925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:47.974944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:47.974951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:47.982692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:47.982711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:47.982719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:47.986804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:47.986828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:47.986835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:47.991092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:47.991111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:47.991118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:47.995443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:47.995463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:47.995470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.003963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.003982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.003989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.010550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.010569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.010576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.016292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.016315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.016322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.022449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.022470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.022479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.027177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.027195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.027202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.030246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.030265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.030271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.036863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.036881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.036887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.045147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.045173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.045180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.056228] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.056247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.056253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.066596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.066615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.066625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.077981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.077999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.865 [2024-11-20 10:47:48.078006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:15.865 [2024-11-20 10:47:48.089328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.865 [2024-11-20 10:47:48.089347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.866 [2024-11-20 10:47:48.089354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:15.866 [2024-11-20 10:47:48.101304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.866 [2024-11-20 10:47:48.101322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.866 [2024-11-20 10:47:48.101328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:15.866 [2024-11-20 10:47:48.108275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.866 [2024-11-20 10:47:48.108294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.866 [2024-11-20 10:47:48.108300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:15.866 [2024-11-20 10:47:48.114348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10) 00:30:15.866 [2024-11-20 10:47:48.114367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:15.866 [2024-11-20 10:47:48.114374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
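Note on the completion prints above: the "(00/22) ... sqhd:.. p:0 m:0 dnr:0" tail that spdk_nvme_print_completion emits is a decode of the 16-bit status field in completion-queue-entry dword 3 (bits 31:16). Per the NVMe base specification, bit 0 is the phase tag, bits 8:1 the status code (SC), bits 11:9 the status code type (SCT), bit 14 "more", and bit 15 "do not retry"; SCT 0x0 / SC 0x22 is Command Transient Transport Error, and dnr:0 marks the command as retryable. A minimal standalone decoder sketch follows (print_status is an illustrative helper, not SPDK's API):

#include <stdint.h>
#include <stdio.h>

/* Decode the status field of an NVMe completion (CQE dword 3, bits 31:16)
 * into the pieces the log prints: (SCT/SC) ... sqhd:.. p:. m:. dnr:.
 * Field layout per the NVMe base spec. */
static void print_status(uint16_t status, uint16_t sqhd)
{
    unsigned p   = status & 0x1;          /* phase tag             */
    unsigned sc  = (status >> 1) & 0xFF;  /* status code           */
    unsigned sct = (status >> 9) & 0x7;   /* status code type      */
    unsigned m   = (status >> 14) & 0x1;  /* more status available */
    unsigned dnr = (status >> 15) & 0x1;  /* do-not-retry          */

    printf("(%02x/%02x) sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, (unsigned)sqhd, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 = COMMAND TRANSIENT TRANSPORT ERROR, the status
     * each digest-failed READ in this log completes with. */
    print_status(0x22u << 1, 0x0062); /* "(00/22) sqhd:0062 p:0 m:0 dnr:0" */
    return 0;
}
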
00:30:15.866 [2024-11-20 10:47:48.118007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10)
00:30:15.866 [2024-11-20 10:47:48.118027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:15.866 [2024-11-20 10:47:48.118034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record pattern -- data digest error on tqpair 0x1b8ca10, the offending READ command, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats for several dozen further reads between 10:47:48.121980 and 10:47:48.367400, differing only in timestamp, cid, lba, and sqhd; only the final occurrence is kept below ...]
00:30:16.132 [2024-11-20 10:47:48.367400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8ca10)
00:30:16.132 [2024-11-20 10:47:48.367421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:16.132 [2024-11-20 10:47:48.367428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:16.132 5228.00 IOPS, 653.50 MiB/s
00:30:16.132 Latency(us)
00:30:16.132 [2024-11-20T09:47:48.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:16.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:16.132 nvme0n1 : 2.00 5230.12 653.76 0.00 0.00 3056.54 501.76 15291.73
00:30:16.132 [2024-11-20T09:47:48.508Z] ===================================================================================================================
00:30:16.132 [2024-11-20T09:47:48.508Z] Total : 5230.12 653.76 0.00 0.00 3056.54 501.76 15291.73
00:30:16.132 {
00:30:16.132   "results": [
00:30:16.132     {
00:30:16.132       "job": "nvme0n1",
00:30:16.132       "core_mask": "0x2",
00:30:16.132       "workload": "randread",
00:30:16.132       "status": "finished",
00:30:16.132       "queue_depth": 16,
00:30:16.132       "io_size": 131072,
00:30:16.132       "runtime": 2.00225,
00:30:16.132       "iops": 5230.1161193657135,
00:30:16.132       "mibps": 653.7645149207142,
00:30:16.132       "io_failed": 0,
00:30:16.132       "io_timeout": 0,
00:30:16.132       "avg_latency_us": 3056.535207537561,
00:30:16.132       "min_latency_us": 501.76,
00:30:16.132       "max_latency_us": 15291.733333333334
00:30:16.132     }
00:30:16.132   ],
00:30:16.132   "core_count": 1
00:30:16.132 }
00:30:16.132 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:16.132 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:16.132 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:16.132 | .driver_specific
00:30:16.132 | .nvme_error
00:30:16.132 | .status_code
00:30:16.132 | .command_transient_transport_error'
00:30:16.132 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 338 > 0 ))
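[Note: the helper being traced above is the crux of the randread digest-error check: with --nvme-error-stat enabled, bdev_get_iostat exposes per-status-code NVMe error counters, and the jq filter keeps only the transient-transport-error count that the (( 338 > 0 )) assertion then tests. A minimal sketch of get_transient_errcount, reconstructed purely from the @27/@28/@18 xtrace lines above rather than copied from host/digest.sh itself:]

    # Reconstructed from the xtrace; bperf_rpc is the wrapper that invokes
    # scripts/rpc.py against the bdevperf socket, as the @18 line shows.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }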
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2229288
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2229288 ']'
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2229288
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229288
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229288'
00:30:16.394 killing process with pid 2229288
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2229288
00:30:16.394 Received shutdown signal, test time was about 2.000000 seconds
00:30:16.394
00:30:16.394 Latency(us)
00:30:16.394 [2024-11-20T09:47:48.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:16.394 [2024-11-20T09:47:48.770Z] ===================================================================================================================
00:30:16.394 [2024-11-20T09:47:48.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2229288
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2230089
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2230089 /var/tmp/bperf.sock
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:16.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:16.394 10:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:16.656 [2024-11-20 10:47:48.784885] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
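[Note: the launch just traced follows the standard bdevperf lifecycle in these tests: start the app with -z so it idles and waits for an RPC client instead of running immediately, poll its UNIX socket, configure it over that socket, then trigger the run. A rough sketch of that lifecycle, using only the paths and arguments visible in this log; the backgrounding and the final perform_tests call are implied by waitforlisten above and host/digest.sh@19 below:]

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z keeps bdevperf idle until an RPC client tells it to start the test
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # waitforlisten (common/autotest_common.sh) polls until the socket answers
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
    # ... controller attach and error injection happen here (next records) ...
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests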
00:30:16.656 [2024-11-20 10:47:48.784943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230089 ]
00:30:16.656 [2024-11-20 10:47:48.869206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:16.656 [2024-11-20 10:47:48.898053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:17.232 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:17.232 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:17.232 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:17.232 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:17.492 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:17.492 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:17.493 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:17.493 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:17.493 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:17.493 10:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:17.753 nvme0n1
00:30:18.014 10:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:18.014 10:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:18.014 10:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:18.014 10:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:18.014 10:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:18.014 10:47:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:18.014 Running I/O for 2 seconds...
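[Note: condensing the RPC traffic just traced into one place: the test enables per-controller NVMe error statistics with unlimited bdev retries, attaches the target with data digest checking turned on (--ddgst), and only then arms the accel-layer crc32c corruption, so every data digest computed on transferred PDUs mismatches and surfaces as the transient transport errors that fill the run below. A sketch of the same sequence against the bperf socket; bperf_rpc demonstrably wraps scripts/rpc.py -s /var/tmp/bperf.sock, while treating rpc_cmd as the same plain rpc.py call is an assumption, and the -i 256 argument is taken verbatim from the trace (its exact semantics are not shown in this log):]

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Count NVMe errors per status code; retry failed I/O indefinitely at the bdev layer.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no stale crc32c injection is active before attaching.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach with data digest enabled; prints the new bdev name (nvme0n1 above).
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm the corruption; -o/-t/-i flags copied verbatim from the xtrace.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256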
00:30:18.014 [2024-11-20 10:47:50.256088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f1868
00:30:18.014 [2024-11-20 10:47:50.257013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:18.014 [2024-11-20 10:47:50.257041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0
[... the same three-record pattern -- Data digest error on tqpair 0x1f81520 (with varying pdu), the offending WRITE command, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats for several dozen further writes between 10:47:50.264754 and 10:47:50.889876, differing only in timestamp, pdu, cid, lba, and sqhd; the log continues with the next occurrence, truncated here ...]
00:30:18.541 [2024-11-20 10:47:50.897364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166fc128
00:30:18.541 [2024-11-20 10:47:50.898286] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.541 [2024-11-20 10:47:50.898302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.541 [2024-11-20 10:47:50.905783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e8d30 00:30:18.541 [2024-11-20 10:47:50.906719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.541 [2024-11-20 10:47:50.906736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.914191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f8a50 00:30:18.804 [2024-11-20 10:47:50.915095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.915111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.922587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7970 00:30:18.804 [2024-11-20 10:47:50.923517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.923533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.930992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f6890 00:30:18.804 [2024-11-20 10:47:50.931975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.931992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.939405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166de8a8 00:30:18.804 [2024-11-20 10:47:50.940321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.940338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.947818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166df988 00:30:18.804 [2024-11-20 10:47:50.948741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.948757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.956231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e0a68 00:30:18.804 [2024-11-20 
10:47:50.957147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.957166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.964652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f5be8 00:30:18.804 [2024-11-20 10:47:50.965578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.965594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.973043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e6738 00:30:18.804 [2024-11-20 10:47:50.973949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.973966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.981439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f4b08 00:30:18.804 [2024-11-20 10:47:50.982371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.982387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.989851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f3a28 00:30:18.804 [2024-11-20 10:47:50.990766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.990782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:50.998293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e23b8 00:30:18.804 [2024-11-20 10:47:50.999224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:50.999241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.006756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ec408 00:30:18.804 [2024-11-20 10:47:51.007683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:51.007700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.015150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e8088 
00:30:18.804 [2024-11-20 10:47:51.016067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:51.016086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.023551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e1b48 00:30:18.804 [2024-11-20 10:47:51.024470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:51.024486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.031954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166fc560 00:30:18.804 [2024-11-20 10:47:51.032892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:51.032909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.040410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166fd640 00:30:18.804 [2024-11-20 10:47:51.041364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:51.041380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.048872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e88f8 00:30:18.804 [2024-11-20 10:47:51.049810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.804 [2024-11-20 10:47:51.049827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.804 [2024-11-20 10:47:51.057281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f8618 00:30:18.805 [2024-11-20 10:47:51.058199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.058215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.065721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7538 00:30:18.805 [2024-11-20 10:47:51.066614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.066630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.074124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with 
pdu=0x2000166ddc00 00:30:18.805 [2024-11-20 10:47:51.075060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.075076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.082558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166dece0 00:30:18.805 [2024-11-20 10:47:51.083478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.083494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.090972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166dfdc0 00:30:18.805 [2024-11-20 10:47:51.091853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.091870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.099388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e0ea0 00:30:18.805 [2024-11-20 10:47:51.100306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.100322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.107800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f6020 00:30:18.805 [2024-11-20 10:47:51.108709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.108726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.116198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e6b70 00:30:18.805 [2024-11-20 10:47:51.117116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.117132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.124595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f46d0 00:30:18.805 [2024-11-20 10:47:51.125533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.125549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.134156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f81520) with pdu=0x2000166f35f0 00:30:18.805 [2024-11-20 10:47:51.135391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.135407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.140338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166df550 00:30:18.805 [2024-11-20 10:47:51.140971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.140987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.149751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7100 00:30:18.805 [2024-11-20 10:47:51.150552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.150569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.158183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166de038 00:30:18.805 [2024-11-20 10:47:51.158945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.158960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.166584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ea248 00:30:18.805 [2024-11-20 10:47:51.167368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.167384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:18.805 [2024-11-20 10:47:51.174991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166eb328 00:30:18.805 [2024-11-20 10:47:51.175818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.805 [2024-11-20 10:47:51.175835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.183417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ff3c8 00:30:19.066 [2024-11-20 10:47:51.184182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.184198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.191836] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5ec8 00:30:19.066 [2024-11-20 10:47:51.192599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.192616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.200240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f2d80 00:30:19.066 [2024-11-20 10:47:51.201015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.201032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.208641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f3e60 00:30:19.066 [2024-11-20 10:47:51.209405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.209421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.217045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f4f40 00:30:19.066 [2024-11-20 10:47:51.217846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.217862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.225451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166de8a8 00:30:19.066 [2024-11-20 10:47:51.226251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.226268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.233862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166df988 00:30:19.066 [2024-11-20 10:47:51.234693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.234712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.242264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e0a68 00:30:19.066 29959.00 IOPS, 117.03 MiB/s [2024-11-20T09:47:51.442Z] [2024-11-20 10:47:51.243233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.243248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:30:19.066 [2024-11-20 10:47:51.250757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f5be8 00:30:19.066 [2024-11-20 10:47:51.251561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.251578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.259149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e6738 00:30:19.066 [2024-11-20 10:47:51.259949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.259966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.267557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e9168 00:30:19.066 [2024-11-20 10:47:51.268369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.268386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.275982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f8e88 00:30:19.066 [2024-11-20 10:47:51.276760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.066 [2024-11-20 10:47:51.276776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.066 [2024-11-20 10:47:51.284405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7da8 00:30:19.066 [2024-11-20 10:47:51.285199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.285216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.292799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ddc00 00:30:19.067 [2024-11-20 10:47:51.293558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.293574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.301198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e9e10 00:30:19.067 [2024-11-20 10:47:51.301992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.302008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 
cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.309620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166eaef0 00:30:19.067 [2024-11-20 10:47:51.310397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.310413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.318030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166feb58 00:30:19.067 [2024-11-20 10:47:51.318838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.318855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.326509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.067 [2024-11-20 10:47:51.327282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.327299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.334931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e23b8 00:30:19.067 [2024-11-20 10:47:51.335718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.335734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.343333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f3a28 00:30:19.067 [2024-11-20 10:47:51.344128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.344144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.351734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f4b08 00:30:19.067 [2024-11-20 10:47:51.352551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.352568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.360152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166de470 00:30:19.067 [2024-11-20 10:47:51.360952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.360968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.368568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166df550 00:30:19.067 [2024-11-20 10:47:51.369370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.369386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.376978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e0630 00:30:19.067 [2024-11-20 10:47:51.377784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.377800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.385382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f57b0 00:30:19.067 [2024-11-20 10:47:51.386178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.386194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.393768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e6300 00:30:19.067 [2024-11-20 10:47:51.394572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.394588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.402175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e8d30 00:30:19.067 [2024-11-20 10:47:51.402964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.402980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.410575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f8a50 00:30:19.067 [2024-11-20 10:47:51.411378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.411394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.418985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7970 00:30:19.067 [2024-11-20 10:47:51.419797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.419813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.427379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7538 00:30:19.067 [2024-11-20 10:47:51.428173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.428189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.067 [2024-11-20 10:47:51.435778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166de038 00:30:19.067 [2024-11-20 10:47:51.436581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.067 [2024-11-20 10:47:51.436597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.328 [2024-11-20 10:47:51.444181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ea248 00:30:19.328 [2024-11-20 10:47:51.444965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.328 [2024-11-20 10:47:51.444981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.328 [2024-11-20 10:47:51.452611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166eb328 00:30:19.328 [2024-11-20 10:47:51.453374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.328 [2024-11-20 10:47:51.453396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.328 [2024-11-20 10:47:51.461027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ff3c8 00:30:19.328 [2024-11-20 10:47:51.461824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.461840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.469428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5ec8 00:30:19.329 [2024-11-20 10:47:51.470230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.470246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.477818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f2d80 00:30:19.329 [2024-11-20 10:47:51.478611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.478628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.486232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f3e60 00:30:19.329 [2024-11-20 10:47:51.487037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.487053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.494630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f4f40 00:30:19.329 [2024-11-20 10:47:51.495434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.495451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.503038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166de8a8 00:30:19.329 [2024-11-20 10:47:51.503799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.503815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.511443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166df988 00:30:19.329 [2024-11-20 10:47:51.512221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.512237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.519852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e0a68 00:30:19.329 [2024-11-20 10:47:51.520652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.520668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.528254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f5be8 00:30:19.329 [2024-11-20 10:47:51.529069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.529085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.536664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e6738 00:30:19.329 [2024-11-20 10:47:51.537427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 
10:47:51.537443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.545078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e9168 00:30:19.329 [2024-11-20 10:47:51.545892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.545908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.553523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f8e88 00:30:19.329 [2024-11-20 10:47:51.554320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.554337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.561930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7da8 00:30:19.329 [2024-11-20 10:47:51.562727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.562744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.570332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166ddc00 00:30:19.329 [2024-11-20 10:47:51.571129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.571146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.578729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e9e10 00:30:19.329 [2024-11-20 10:47:51.579507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.579523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.587154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166eaef0 00:30:19.329 [2024-11-20 10:47:51.587973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.587988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.595572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166feb58 00:30:19.329 [2024-11-20 10:47:51.596350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:19.329 [2024-11-20 10:47:51.596366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.604100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.329 [2024-11-20 10:47:51.604898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.604914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.612511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.329 [2024-11-20 10:47:51.613313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.613330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.620895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.329 [2024-11-20 10:47:51.621692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.621708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.629305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.329 [2024-11-20 10:47:51.630100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.329 [2024-11-20 10:47:51.630116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.329 [2024-11-20 10:47:51.637723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.638536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.638553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.646184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.646984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.647000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.654582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.655388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20046 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.655404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.662993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.663785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.663801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.671384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.672151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.672173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.679796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.680582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.680599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.688207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.689007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.689023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.330 [2024-11-20 10:47:51.696611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.330 [2024-11-20 10:47:51.697412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.330 [2024-11-20 10:47:51.697429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.592 [2024-11-20 10:47:51.704997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.592 [2024-11-20 10:47:51.705801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.592 [2024-11-20 10:47:51.705817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:19.592 [2024-11-20 10:47:51.713415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90 00:30:19.592 [2024-11-20 10:47:51.714222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4616 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:19.592 [2024-11-20 10:47:51.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:30:19.592 [2024-11-20 10:47:51.721827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166e5a90
00:30:19.592 [2024-11-20 10:47:51.722637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:19.592 [2024-11-20 10:47:51.722653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0
[... repeated Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets on tqpair=(0x1f81520), pdu offsets 0x2000166e5a90 through 0x2000166f7538, elided; the bdev_get_iostat check below counts 237 transient transport errors for this run ...]
00:30:20.117 [2024-11-20 10:47:52.236549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166f7538
00:30:20.117 [2024-11-20 10:47:52.237674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:20.117 [2024-11-20 10:47:52.237691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:20.117 [2024-11-20 10:47:52.244951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81520) with pdu=0x2000166fa3a0
00:30:20.117 30174.50 IOPS, 117.87 MiB/s [2024-11-20T09:47:52.493Z]
00:30:20.117 [2024-11-20 10:47:52.245942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:20.117 [2024-11-20 10:47:52.245957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0
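Each triplet above is one injected failure: the TCP transport recomputes the CRC32C over a data PDU in data_crc32_calc_done, the result disagrees with the DDGST field because the accel error injector is corrupting crc32c operations, and the affected WRITE completes back to bdevperf as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The NVMe/TCP data digest is plain CRC32C (Castagnoli polynomial). A minimal pure-Python reference of that digest, for illustration only; SPDK computes it through its accel framework, which is exactly where accel_error_inject_error hooks in:

    def crc32c(data: bytes) -> int:
        """Bitwise reflected CRC32C (poly 0x82F63B78), the digest NVMe/TCP
        uses for the DDGST field of a data PDU."""
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 1:
                    crc = (crc >> 1) ^ 0x82F63B78
                else:
                    crc >>= 1
        return crc ^ 0xFFFFFFFF

    # Standard CRC-32C check value:
    assert crc32c(b"123456789") == 0xE3069283

Flipping any bit of either the payload or the stored digest makes the comparison fail, which is the condition the error lines above report.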
00:30:20.117
00:30:20.117 Latency(us)
00:30:20.117 [2024-11-20T09:47:52.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:20.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:20.117 nvme0n1 : 2.00 30188.51 117.92 0.00 0.00 4234.54 2075.31 15619.41
00:30:20.117 [2024-11-20T09:47:52.493Z] ===================================================================================================================
00:30:20.117 [2024-11-20T09:47:52.493Z] Total : 30188.51 117.92 0.00 0.00 4234.54 2075.31 15619.41
00:30:20.117 {
00:30:20.117   "results": [
00:30:20.117     {
00:30:20.117       "job": "nvme0n1",
00:30:20.117       "core_mask": "0x2",
00:30:20.117       "workload": "randwrite",
00:30:20.117       "status": "finished",
00:30:20.117       "queue_depth": 128,
00:30:20.117       "io_size": 4096,
00:30:20.117       "runtime": 2.004869,
00:30:20.117       "iops": 30188.50608194351,
00:30:20.117       "mibps": 117.92385188259183,
00:30:20.117       "io_failed": 0,
00:30:20.117       "io_timeout": 0,
00:30:20.117       "avg_latency_us": 4234.542622871368,
00:30:20.117       "min_latency_us": 2075.306666666667,
00:30:20.117       "max_latency_us": 15619.413333333334
00:30:20.117     }
00:30:20.117   ],
00:30:20.117   "core_count": 1
00:30:20.117 }
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:20.117 | .driver_specific
00:30:20.117 | .nvme_error
00:30:20.117 | .status_code
00:30:20.117 | .command_transient_transport_error'
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 ))
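get_transient_errcount pipes bperf_rpc bdev_get_iostat through the jq filter above to read the per-status-code counter that bdev_nvme_set_options --nvme-error-stat enables, and the run passes because 237 transient transport errors were tallied against nvme0n1, matching the error/WRITE/completion triplets above. A sketch of the same extraction in Python (assumes it runs from an SPDK checkout with bdevperf still listening on /var/tmp/bperf.sock):

    import json
    import subprocess

    # Same RPC the trace shows; scripts/rpc.py path is relative to the
    # SPDK repository root here (the log uses the full Jenkins path).
    out = subprocess.check_output([
        "./scripts/rpc.py", "-s", "/var/tmp/bperf.sock",
        "bdev_get_iostat", "-b", "nvme0n1",
    ])
    stat = json.loads(out)
    # Field path copied from the jq filter in the log.
    errcount = (stat["bdevs"][0]["driver_specific"]["nvme_error"]
                ["status_code"]["command_transient_transport_error"])
    assert errcount > 0  # the shell test checks (( 237 > 0 ))

For scale, the JSON above is self-consistent: 30188.51 IOPS over the 2.004869 s runtime is about 60,524 I/Os, and at 4 KiB per I/O that is 30188.51 / 256 = 117.92 MiB/s, which is exactly the reported mibps value.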
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2230089
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2230089 ']'
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2230089
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230089
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230089'
00:30:20.378 killing process with pid 2230089
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2230089
00:30:20.378 Received shutdown signal, test time was about 2.000000 seconds
00:30:20.378
00:30:20.378 Latency(us)
00:30:20.378 [2024-11-20T09:47:52.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:20.378 [2024-11-20T09:47:52.754Z] ===================================================================================================================
00:30:20.378 [2024-11-20T09:47:52.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2230089
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2230940
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2230940 /var/tmp/bperf.sock
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2230940 ']'
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:20.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
10:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:20.378 [2024-11-20 10:47:52.666364] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:30:20.378 [2024-11-20 10:47:52.666421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230940 ]
00:30:20.378 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:20.378 Zero copy mechanism will not be used.
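waitforlisten simply polls until the freshly launched bdevperf accepts connections on its RPC socket before any bperf_rpc call is issued; the max_retries=100 in the trace is the retry budget. A rough Python equivalent of that wait loop (wait_for_rpc_socket is a hypothetical helper, not the autotest implementation):

    import socket
    import time

    def wait_for_rpc_socket(path="/var/tmp/bperf.sock",
                            retries=100, delay=0.1):
        """Poll a UNIX-domain socket until the server behind it accepts
        a connection, roughly what waitforlisten does before bperf_rpc."""
        for _ in range(retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return  # listener is up
            except OSError:
                time.sleep(delay)  # not created or not listening yet
            finally:
                s.close()
        raise TimeoutError(f"no listener on {path} after {retries} tries")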
00:30:20.378 [2024-11-20 10:47:52.746937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:20.639 [2024-11-20 10:47:52.776028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:21.209 10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:21.210 10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:21.471 10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:21.471 10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:21.732 nvme0n1
00:30:21.732 10:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
10:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:21.732 10:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:21.993 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:21.993 Zero copy mechanism will not be used.
00:30:21.993 Running I/O for 2 seconds...
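The trace above is the entire setup for the second error pass: enable per-status-code NVMe error statistics, make sure crc32c injection starts disabled, attach the TCP controller with data digest (--ddgst) turned on, then arm the injector to corrupt crc32c operations before perform_tests starts the 128 KiB randwrite workload. The same sequence as a small driver script, a sketch built from the exact rpc.py invocations in the trace (the rpc() wrapper is mine):

    import subprocess

    RPC = ["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py",
           "-s", "/var/tmp/bperf.sock"]

    def rpc(*args):
        # Thin wrapper over the rpc.py CLI used throughout the log.
        subprocess.check_call(RPC + list(args))

    # Count NVMe errors per status code; retry indefinitely in the bdev layer.
    rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
    # Start clean: no crc32c corruption while the controller attaches.
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    # Attach the target with the data digest (DDGST) enabled on the connection.
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # Arm the injector to corrupt crc32c results (flags verbatim from the trace).
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")

With the injector armed, every digest the transport computes for the new workload can come back corrupted, which is what produces the error run that follows.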
00:30:21.994 [2024-11-20 10:47:54.141757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:21.994 [2024-11-20 10:47:54.142036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.994 [2024-11-20 10:47:54.142061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:21.994 [2024-11-20 10:47:54.149325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:21.994 [2024-11-20 10:47:54.149556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.994 [2024-11-20 10:47:54.149574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets on tqpair=(0x1f81860), pdu=0x2000166ff3c8, 128 KiB (len:32) writes, elided ...]
00:30:22.258 [2024-11-20 10:47:54.587558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:22.258 [2024-11-20 10:47:54.587850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.258 [2024-11-20 10:47:54.587867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:22.258 [2024-11-20 10:47:54.595931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:22.258 [2024-11-20 10:47:54.596173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.258 [2024-11-20 10:47:54.596189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.258 [2024-11-20 10:47:54.604495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.258 [2024-11-20 10:47:54.604740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.258 [2024-11-20 10:47:54.604757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.258 [2024-11-20 10:47:54.609817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.258 [2024-11-20 10:47:54.610111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.258 [2024-11-20 10:47:54.610127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.258 [2024-11-20 10:47:54.616608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.258 [2024-11-20 10:47:54.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.258 [2024-11-20 10:47:54.616682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.258 [2024-11-20 10:47:54.624167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.258 [2024-11-20 10:47:54.624266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.258 [2024-11-20 10:47:54.624281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.258 [2024-11-20 10:47:54.628144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.258 [2024-11-20 10:47:54.628420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.258 [2024-11-20 10:47:54.628440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.520 [2024-11-20 10:47:54.634473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.520 [2024-11-20 10:47:54.634538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.634553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.639135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.639211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.639227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.645870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.646180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.646197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.655243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.655536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.655553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.662846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.663143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.663164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.670474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.670527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.670543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.677299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.677576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.677592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.682128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.682219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.688821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 
10:47:54.689122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.689139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.695848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.695916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.695931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.700550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.700617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.700632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.709681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.709930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.709947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.720676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.721006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.721023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.731199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.731493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.731510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.741746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.742005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.742021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.751870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with 
pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.752190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.752207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.761849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.762137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.762154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.772251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.772514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.772531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.782253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.782518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.782535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.792242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.792550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.792567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.802900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.803222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.803239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.813059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.813379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.813396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.823218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.823552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.823568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.833274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.833491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.833507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.841657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.841859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.841875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.846468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.846590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.846609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.521 [2024-11-20 10:47:54.853126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.521 [2024-11-20 10:47:54.853249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.521 [2024-11-20 10:47:54.853265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.858379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.858658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.858675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.864832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.865054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.865070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.871713] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.871897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.871913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.878327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.878501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.878518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.883531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.883600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.883616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.886676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.886839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.886855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.889487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.889651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.889667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.522 [2024-11-20 10:47:54.892166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.522 [2024-11-20 10:47:54.892321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.522 [2024-11-20 10:47:54.892338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.894815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.894987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.895003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.784 
[2024-11-20 10:47:54.897740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.897894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.897909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.900527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.900686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.900702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.903196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.903388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.903404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.906381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.906565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.908904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.909054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.909070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.911404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.911551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.911567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.914255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.914418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.914434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.917364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.917514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.917529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.920013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.920155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.920176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.923008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.923197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.923213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.927680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.927726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.927742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.933844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.784 [2024-11-20 10:47:54.934136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.784 [2024-11-20 10:47:54.934153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.784 [2024-11-20 10:47:54.943169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:54.943407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:54.943424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:54.952975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:54.953238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:54.953262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:54.962838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:54.963128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:54.963145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:54.972396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:54.972644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:54.972665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:54.982395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:54.982665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:54.982682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:54.992251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:54.992579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:54.992596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.001284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.001519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.011623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.011872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.011890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.021639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.021749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.021765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.026531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.026673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.026689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.029758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.029899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.029915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.032690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.032821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.032837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.035684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.035818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.035834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.038387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.038517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.038533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.041210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.041344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.041360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.044132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.044269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.044285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.046980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.047109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.047125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.049446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.049579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.049595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.051948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.052077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.052093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.054439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.054570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.054585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.056929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.057062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.057078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.059423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.059571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.059588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.062603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.062768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 
10:47:55.062784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.065273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.065402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.065418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.067954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.068080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.068096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.074091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.074369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.074387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.078452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.785 [2024-11-20 10:47:55.078581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.785 [2024-11-20 10:47:55.078597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.785 [2024-11-20 10:47:55.084288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.084549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.084568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.786 [2024-11-20 10:47:55.092099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.092376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.092393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.786 [2024-11-20 10:47:55.100511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.100771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:22.786 [2024-11-20 10:47:55.100792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.786 [2024-11-20 10:47:55.110280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.110504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.110521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.786 [2024-11-20 10:47:55.120596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.120848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.120865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.786 [2024-11-20 10:47:55.130725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.130907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.130923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.786 4394.00 IOPS, 549.25 MiB/s [2024-11-20T09:47:55.162Z] [2024-11-20 10:47:55.141165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.141409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.141425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.786 [2024-11-20 10:47:55.151567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:22.786 [2024-11-20 10:47:55.151790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.786 [2024-11-20 10:47:55.151807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.049 [2024-11-20 10:47:55.160554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.049 [2024-11-20 10:47:55.160882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.049 [2024-11-20 10:47:55.160900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.049 [2024-11-20 10:47:55.167189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.049 [2024-11-20 10:47:55.167322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.049 [2024-11-20 10:47:55.167338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:23.049 [2024-11-20 10:47:55.170607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:23.050 [2024-11-20 10:47:55.170738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.050 [2024-11-20 10:47:55.170754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... identical data_crc32_calc_done / nvme_io_qpair_print_command / spdk_nvme_print_completion triples repeat for several dozen more WRITE commands on tqpair=(0x1f81860) between 10:47:55.174 and 10:47:55.580 (sqid:1 cid:1, lba varies, sqhd cycles 0002/0022/0042/0062); every command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
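For context on what each of these triples records: an NVMe/TCP PDU that carries data may be followed by a data digest (DDGST), which the spec defines as a CRC32C over the PDU's data payload. When the receiver's own CRC32C of the payload disagrees with the DDGST it received, the transport declares a data digest error (the data_crc32_calc_done messages) and the command is completed with the transient transport error status (00/22) seen in every completion here. What follows is a minimal, self-contained sketch of that check under those assumptions, not SPDK's actual tcp.c code; crc32c() and check_data_digest() are illustrative names, not SPDK APIs.

/*
 * Minimal sketch of NVMe/TCP data digest (DDGST) validation.
 * DDGST is a CRC32C over the PDU data payload; a mismatch is the
 * "Data digest error" condition the log above records once per WRITE.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 if the received DDGST matches the payload, -1 on a digest error. */
static int check_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[] = "123456789";
    uint32_t ddgst = crc32c(payload, 9);   /* standard CRC32C check value: 0xE3069283 */

    printf("crc32c(\"123456789\") = 0x%08" PRIX32 "\n", ddgst);
    printf("intact PDU:    %s\n", check_data_digest(payload, 9, ddgst) ? "digest error" : "ok");

    payload[3] ^= 0x01;                    /* flip one bit, as if corrupted in flight */
    printf("corrupted PDU: %s\n", check_data_digest(payload, 9, ddgst) ? "digest error" : "ok");
    return 0;
}

Built with cc ddgst.c && ./a.out, the flipped bit makes the second check report a digest error; the receiver cannot tell where the corruption happened, which is why the status is a transient transport error rather than a media error.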
00:30:23.317 [2024-11-20 10:47:55.589187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:23.317 [2024-11-20 10:47:55.589432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.317 [2024-11-20 10:47:55.589448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... roughly thirty more identical triples follow between 10:47:55.599 and 10:47:55.751 (now sqid:1 cid:0, lba varies), each completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:23.581 [2024-11-20 10:47:55.751414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:23.581 [2024-11-20 10:47:55.751485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.581 [2024-11-20 10:47:55.751501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:23.581 [2024-11-20 10:47:55.754541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.754611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.754626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.759278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.759582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.759599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.768916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.769191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.769208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.778588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.778910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.778935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.787797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.788093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.788111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.797970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.798287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.798304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.808595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.808836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.808852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.818386] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.818637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.818653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.828804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.829048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.829065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.838652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.838924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.838941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.849037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.849302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.849318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.581 [2024-11-20 10:47:55.857900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.581 [2024-11-20 10:47:55.858114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-20 10:47:55.858131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.868599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.868837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.868854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.877226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.877349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.877365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.582 
[2024-11-20 10:47:55.881898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.881951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.881967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.884951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.884995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.885011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.887767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.887811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.887827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.890621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.890682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.890697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.893263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.893319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.893335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.895736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.895780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.895796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.898235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.898283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.898299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.900724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.900769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.900785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.903227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.903270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.903285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.906244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.906296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.906312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.911681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.911722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.911737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.914145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.914202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.914217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.916647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.916707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.916723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.919189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.919252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.919269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.921710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.921771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.921787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.924958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.925069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.925088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.928045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.928102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.928118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.932293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.932337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.936878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.936930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.936946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.939874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.939918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.939933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.944810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.944856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.944871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.948406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.948459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.948475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.582 [2024-11-20 10:47:55.951020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.582 [2024-11-20 10:47:55.951066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.582 [2024-11-20 10:47:55.951081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.953598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.953643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.953659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.956302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.956383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.956398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.959815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.959894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.959909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.962491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.962535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.962550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.964997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.965044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.965059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.967482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.967533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.967549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.969959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.970007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.970023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.972448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.972496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.972511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.974913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.974964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.974980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.977381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.977440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.977456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.979848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.979907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.979922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.982297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.982345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 
10:47:55.982361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.984843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.984886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.984901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.988962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.989009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.989024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.993711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.993771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.993787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.996364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.996412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.996428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:55.998807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:55.998866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:55.998882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:56.001676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:56.001782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:56.001798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:56.004687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:56.004742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:23.845 [2024-11-20 10:47:56.004760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:56.007146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:56.007202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:56.007218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:56.009862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:56.009971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:56.009988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:56.013026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:56.013089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:56.013104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.845 [2024-11-20 10:47:56.015533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.845 [2024-11-20 10:47:56.015580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.845 [2024-11-20 10:47:56.015595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.018368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.018464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.018479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.021909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.021966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.021981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.024661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.024718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.024734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.028096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.028149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.028169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.030852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.030943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.030958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.033984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.034045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.034061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.036456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.036509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.036525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.038891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.038943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.038959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.041359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.041414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.041429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.044213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.044257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.044273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.049738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.049784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.049800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.052215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.052268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.052284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.054621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.054669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.054684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.057080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.057143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.057163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.060396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.060505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.060520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.063141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.063211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.063227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.065569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.065624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.065639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.067999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.068051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.068067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.071076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.071166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.071181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.074883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.074985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.075001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.080578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.080640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.080656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.083023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.083084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.083101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.085783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.085827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.085843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.092561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.092609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.092624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.095764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.095834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.095850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.098913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.098957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.846 [2024-11-20 10:47:56.098973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.846 [2024-11-20 10:47:56.102020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.846 [2024-11-20 10:47:56.102069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.102084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.104746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.104790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.104806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.107668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.107721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.107737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.110227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.110279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.110295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.114344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 
10:47:56.114423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.114439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.119997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.120061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.120077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.122773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.122822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.122837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.125331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.125377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.125394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.127816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.127873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.127889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.130346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.130396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.130412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.133278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8 00:30:23.847 [2024-11-20 10:47:56.133348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.847 [2024-11-20 10:47:56.133364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:23.847 [2024-11-20 10:47:56.137622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with 
00:30:23.847 6025.50 IOPS, 753.19 MiB/s [2024-11-20T09:47:56.223Z]
[2024-11-20 10:47:56.142627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f81860) with pdu=0x2000166ff3c8
00:30:23.847 [2024-11-20 10:47:56.142705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.847 [2024-11-20 10:47:56.142724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:23.847
00:30:23.847 Latency(us)
00:30:23.847 [2024-11-20T09:47:56.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:23.847 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:23.847 nvme0n1 : 2.00 6026.82 753.35 0.00 0.00 2651.51 1153.71 12124.16
00:30:23.847 [2024-11-20T09:47:56.223Z] ===================================================================================================================
00:30:23.847 [2024-11-20T09:47:56.223Z] Total : 6026.82 753.35 0.00 0.00 2651.51 1153.71 12124.16
00:30:23.847 {
00:30:23.847   "results": [
00:30:23.847     {
00:30:23.847       "job": "nvme0n1",
00:30:23.847       "core_mask": "0x2",
00:30:23.847       "workload": "randwrite",
00:30:23.847       "status": "finished",
00:30:23.847       "queue_depth": 16,
00:30:23.847       "io_size": 131072,
00:30:23.847       "runtime": 2.002716,
00:30:23.847       "iops": 6026.8155844363355,
00:30:23.847       "mibps": 753.3519480545419,
00:30:23.847       "io_failed": 0,
00:30:23.847       "io_timeout": 0,
00:30:23.847       "avg_latency_us": 2651.506743993372,
00:30:23.847       "min_latency_us": 1153.7066666666667,
00:30:23.847       "max_latency_us": 12124.16
00:30:23.847     }
00:30:23.847   ],
00:30:23.847   "core_count": 1
00:30:23.847 }
00:30:23.847 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:23.847 | .driver_specific
00:30:23.847 | .nvme_error
00:30:23.847 | .status_code
00:30:23.847 | .command_transient_transport_error'
00:30:23.847 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
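The trace above shows how the harness turns those log entries into a pass/fail signal: it queries bdevperf's RPC socket for per-bdev I/O statistics and extracts the transient-transport-error counter from the returned JSON. A minimal standalone sketch of the same pipeline, assuming an SPDK app is still serving RPCs on /var/tmp/bperf.sock and that nvme0n1 is the bdev under test (paths and names mirror this run and are not general):

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    # bdev_get_iostat exposes the NVMe completion-status breakdown under
    # driver_specific.nvme_error.status_code for NVMe-backed bdevs.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

    # The test passes only if at least one injected digest error surfaced
    # as a transient transport error; this run counted 390 of them.
    (( errcount > 0 ))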
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.108 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230940 00:30:24.108 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:24.108 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:24.108 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230940' 00:30:24.108 killing process with pid 2230940 00:30:24.108 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2230940 00:30:24.108 Received shutdown signal, test time was about 2.000000 seconds 00:30:24.108 00:30:24.108 Latency(us) 00:30:24.108 [2024-11-20T09:47:56.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.108 [2024-11-20T09:47:56.484Z] =================================================================================================================== 00:30:24.108 [2024-11-20T09:47:56.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.108 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2230940 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2228432 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2228432 ']' 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2228432 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228432 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228432' 00:30:24.369 killing process with pid 2228432 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2228432 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2228432 00:30:24.369 00:30:24.369 real 0m16.636s 00:30:24.369 user 0m32.837s 00:30:24.369 sys 0m3.702s 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.369 ************************************ 00:30:24.369 END TEST nvmf_digest_error 00:30:24.369 ************************************ 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # 
nvmftestfini 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.369 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.630 rmmod nvme_tcp 00:30:24.630 rmmod nvme_fabrics 00:30:24.630 rmmod nvme_keyring 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2228432 ']' 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2228432 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2228432 ']' 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2228432 00:30:24.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2228432) - No such process 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2228432 is not found' 00:30:24.630 Process with pid 2228432 is not found 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.630 10:47:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.547 10:47:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.547 00:30:26.547 real 0m43.570s 00:30:26.547 user 1m8.279s 00:30:26.547 sys 0m13.351s 00:30:26.547 10:47:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.547 10:47:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:26.547 ************************************ 00:30:26.547 END TEST nvmf_digest 00:30:26.547 ************************************ 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # 
[[ 0 -eq 1 ]] 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.809 ************************************ 00:30:26.809 START TEST nvmf_bdevperf 00:30:26.809 ************************************ 00:30:26.809 10:47:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:26.809 * Looking for test storage... 00:30:26.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.809 --rc genhtml_branch_coverage=1 00:30:26.809 --rc genhtml_function_coverage=1 00:30:26.809 --rc genhtml_legend=1 00:30:26.809 --rc geninfo_all_blocks=1 00:30:26.809 --rc geninfo_unexecuted_blocks=1 00:30:26.809 00:30:26.809 ' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.809 --rc genhtml_branch_coverage=1 00:30:26.809 --rc genhtml_function_coverage=1 00:30:26.809 --rc genhtml_legend=1 00:30:26.809 --rc geninfo_all_blocks=1 00:30:26.809 --rc geninfo_unexecuted_blocks=1 00:30:26.809 00:30:26.809 ' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.809 --rc genhtml_branch_coverage=1 00:30:26.809 --rc genhtml_function_coverage=1 00:30:26.809 --rc genhtml_legend=1 00:30:26.809 --rc geninfo_all_blocks=1 00:30:26.809 --rc geninfo_unexecuted_blocks=1 00:30:26.809 00:30:26.809 ' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.809 --rc genhtml_branch_coverage=1 00:30:26.809 --rc genhtml_function_coverage=1 00:30:26.809 --rc genhtml_legend=1 00:30:26.809 --rc geninfo_all_blocks=1 00:30:26.809 --rc geninfo_unexecuted_blocks=1 00:30:26.809 00:30:26.809 ' 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.809 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.069 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.070 10:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:35.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:35.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:35.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:35.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.214 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:30:35.215 00:30:35.215 --- 10.0.0.2 ping statistics --- 00:30:35.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.215 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:30:35.215 00:30:35.215 --- 10.0.0.1 ping statistics --- 00:30:35.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.215 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2235929 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2235929 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2235929 ']' 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.215 10:48:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.215 [2024-11-20 10:48:06.819243] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:30:35.215 [2024-11-20 10:48:06.819343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.215 [2024-11-20 10:48:06.920334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.215 [2024-11-20 10:48:06.972609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.215 [2024-11-20 10:48:06.972662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.215 [2024-11-20 10:48:06.972671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.215 [2024-11-20 10:48:06.972678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.215 [2024-11-20 10:48:06.972685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.215 [2024-11-20 10:48:06.974561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.215 [2024-11-20 10:48:06.974725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.215 [2024-11-20 10:48:06.974727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.476 [2024-11-20 10:48:07.700974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.476 Malloc0 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.476 [2024-11-20 10:48:07.775539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.476 { 00:30:35.476 "params": { 00:30:35.476 "name": "Nvme$subsystem", 00:30:35.476 "trtype": "$TEST_TRANSPORT", 00:30:35.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.476 "adrfam": "ipv4", 00:30:35.476 "trsvcid": "$NVMF_PORT", 00:30:35.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.476 "hdgst": ${hdgst:-false}, 00:30:35.476 "ddgst": ${ddgst:-false} 00:30:35.476 }, 00:30:35.476 "method": "bdev_nvme_attach_controller" 00:30:35.476 } 00:30:35.476 EOF 00:30:35.476 )") 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:35.476 10:48:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.476 "params": { 00:30:35.476 "name": "Nvme1", 00:30:35.476 "trtype": "tcp", 00:30:35.476 "traddr": "10.0.0.2", 00:30:35.476 "adrfam": "ipv4", 00:30:35.476 "trsvcid": "4420", 00:30:35.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.476 "hdgst": false, 00:30:35.476 "ddgst": false 00:30:35.476 }, 00:30:35.476 "method": "bdev_nvme_attach_controller" 00:30:35.476 }' 00:30:35.476 [2024-11-20 10:48:07.835052] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:30:35.476 [2024-11-20 10:48:07.835119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236111 ] 00:30:35.736 [2024-11-20 10:48:07.928912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.736 [2024-11-20 10:48:07.981870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.996 Running I/O for 1 seconds... 00:30:37.383 8561.00 IOPS, 33.44 MiB/s 00:30:37.383 Latency(us) 00:30:37.383 [2024-11-20T09:48:09.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:37.383 Verification LBA range: start 0x0 length 0x4000 00:30:37.383 Nvme1n1 : 1.01 8613.39 33.65 0.00 0.00 14799.77 2348.37 12997.97 00:30:37.383 [2024-11-20T09:48:09.759Z] =================================================================================================================== 00:30:37.383 [2024-11-20T09:48:09.759Z] Total : 8613.39 33.65 0.00 0.00 14799.77 2348.37 12997.97 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2236452 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:37.383 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:37.383 { 00:30:37.383 "params": { 00:30:37.383 "name": "Nvme$subsystem", 00:30:37.383 "trtype": "$TEST_TRANSPORT", 00:30:37.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.383 "adrfam": "ipv4", 00:30:37.383 "trsvcid": "$NVMF_PORT", 00:30:37.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.384 "hdgst": ${hdgst:-false}, 00:30:37.384 "ddgst": ${ddgst:-false} 00:30:37.384 }, 00:30:37.384 "method": "bdev_nvme_attach_controller" 00:30:37.384 } 00:30:37.384 EOF 00:30:37.384 )") 00:30:37.384 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:37.384 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:30:37.384 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:37.384 10:48:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:37.384 "params": { 00:30:37.384 "name": "Nvme1", 00:30:37.384 "trtype": "tcp", 00:30:37.384 "traddr": "10.0.0.2", 00:30:37.384 "adrfam": "ipv4", 00:30:37.384 "trsvcid": "4420", 00:30:37.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.384 "hdgst": false, 00:30:37.384 "ddgst": false 00:30:37.384 }, 00:30:37.384 "method": "bdev_nvme_attach_controller" 00:30:37.384 }' 00:30:37.384 [2024-11-20 10:48:09.520512] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:30:37.384 [2024-11-20 10:48:09.520589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236452 ] 00:30:37.384 [2024-11-20 10:48:09.615025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.384 [2024-11-20 10:48:09.666227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.645 Running I/O for 15 seconds... 00:30:39.599 10908.00 IOPS, 42.61 MiB/s [2024-11-20T09:48:12.547Z] 10972.50 IOPS, 42.86 MiB/s [2024-11-20T09:48:12.547Z] 10:48:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2235929 00:30:40.171 10:48:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:40.171 [2024-11-20 10:48:12.482439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.171 [2024-11-20 10:48:12.482484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.171 [2024-11-20 10:48:12.482502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.171 [2024-11-20 10:48:12.482512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.171 [2024-11-20 10:48:12.482529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.171 [2024-11-20 10:48:12.482539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.171 [2024-11-20 10:48:12.482550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 
10:48:12.482599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.482982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.482992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.483000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.172 [2024-11-20 10:48:12.483010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.172 [2024-11-20 10:48:12.483018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.172 [2024-11-20 10:48:12.483245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.172 [2024-11-20 10:48:12.483255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:40.173 [2024-11-20 10:48:12.483669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:40.173 [2024-11-20 10:48:12.483686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.173 [2024-11-20 10:48:12.483859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.173 [2024-11-20 10:48:12.483868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:40.174 [2024-11-20 10:48:12.483961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.483986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.483994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.174 [2024-11-20 10:48:12.484539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.174 [2024-11-20 10:48:12.484546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.175 [2024-11-20 10:48:12.484831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.484840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73390 is same with the state(6) to be set
00:30:40.175 [2024-11-20 10:48:12.484850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:40.175 [2024-11-20 10:48:12.484856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:40.175 [2024-11-20 10:48:12.484864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91712 len:8 PRP1 0x0 PRP2 0x0
00:30:40.175 [2024-11-20 10:48:12.484872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:40.175 [2024-11-20 10:48:12.488518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.175 [2024-11-20 10:48:12.488572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.175 [2024-11-20 10:48:12.489379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.175 [2024-11-20 10:48:12.489418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.175 [2024-11-20 10:48:12.489431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.175 [2024-11-20 10:48:12.489674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.175 [2024-11-20 10:48:12.489896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.175 [2024-11-20 10:48:12.489906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.175 [2024-11-20 10:48:12.489915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.175 [2024-11-20 10:48:12.489923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.175 [2024-11-20 10:48:12.502710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.175 [2024-11-20 10:48:12.503431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.175 [2024-11-20 10:48:12.503471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.175 [2024-11-20 10:48:12.503482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.175 [2024-11-20 10:48:12.503718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.175 [2024-11-20 10:48:12.503939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.175 [2024-11-20 10:48:12.503949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.175 [2024-11-20 10:48:12.503956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.175 [2024-11-20 10:48:12.503965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.175 [2024-11-20 10:48:12.516532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.175 [2024-11-20 10:48:12.517187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.175 [2024-11-20 10:48:12.517228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.175 [2024-11-20 10:48:12.517240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.175 [2024-11-20 10:48:12.517477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.175 [2024-11-20 10:48:12.517699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.175 [2024-11-20 10:48:12.517708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.175 [2024-11-20 10:48:12.517717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.175 [2024-11-20 10:48:12.517729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.175 [2024-11-20 10:48:12.530311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.175 [2024-11-20 10:48:12.530973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.175 [2024-11-20 10:48:12.531016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.175 [2024-11-20 10:48:12.531027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.176 [2024-11-20 10:48:12.531273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.176 [2024-11-20 10:48:12.531495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.176 [2024-11-20 10:48:12.531505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.176 [2024-11-20 10:48:12.531513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.176 [2024-11-20 10:48:12.531521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.437 [2024-11-20 10:48:12.544085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.437 [2024-11-20 10:48:12.544738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.437 [2024-11-20 10:48:12.544782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.437 [2024-11-20 10:48:12.544793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.437 [2024-11-20 10:48:12.545031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.437 [2024-11-20 10:48:12.545261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.437 [2024-11-20 10:48:12.545273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.437 [2024-11-20 10:48:12.545281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.545289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.557853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.558496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.558541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.558553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.558792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.559013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.559023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.559031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.559040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.571626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.572264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.572313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.572326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.572567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.572789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.572799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.572807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.572816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.585399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.586055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.586105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.586117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.586370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.586593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.586603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.586611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.586620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.599197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.599836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.599888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.599900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.600143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.600379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.600391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.600399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.600408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.613027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.613679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.613733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.613745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.613997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.614234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.614246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.614255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.614264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.626873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.627601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.627659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.627672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.627919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.628143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.628154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.628176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.628185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.640786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.641505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.641571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.641585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.641838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.642064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.642075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.642085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.642094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.654696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.655377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.655443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.655457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.655709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.655935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.655955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.655964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.655973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.668637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.669115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.669149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.669171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.669400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.669622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.669635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.438 [2024-11-20 10:48:12.669644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.438 [2024-11-20 10:48:12.669655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.438 [2024-11-20 10:48:12.682465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.438 [2024-11-20 10:48:12.683096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.438 [2024-11-20 10:48:12.683124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.438 [2024-11-20 10:48:12.683133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.438 [2024-11-20 10:48:12.683363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.438 [2024-11-20 10:48:12.683584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.438 [2024-11-20 10:48:12.683595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.439 [2024-11-20 10:48:12.683604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.439 [2024-11-20 10:48:12.683613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.439 [2024-11-20 10:48:12.696391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.439 [2024-11-20 10:48:12.696999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.439 [2024-11-20 10:48:12.697025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.439 [2024-11-20 10:48:12.697034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.439 [2024-11-20 10:48:12.697370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.439 [2024-11-20 10:48:12.697593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.439 [2024-11-20 10:48:12.697604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.439 [2024-11-20 10:48:12.697612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.439 [2024-11-20 10:48:12.697627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.439 [2024-11-20 10:48:12.710207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.439 [2024-11-20 10:48:12.710816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.439 [2024-11-20 10:48:12.710843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.439 [2024-11-20 10:48:12.710852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.439 [2024-11-20 10:48:12.711071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.439 [2024-11-20 10:48:12.711302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.439 [2024-11-20 10:48:12.711315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.439 [2024-11-20 10:48:12.711324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.439 [2024-11-20 10:48:12.711332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.439 [2024-11-20 10:48:12.724103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.439 [2024-11-20 10:48:12.724752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.439 [2024-11-20 10:48:12.724818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.439 [2024-11-20 10:48:12.724831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.439 [2024-11-20 10:48:12.725084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.439 [2024-11-20 10:48:12.725326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.439 [2024-11-20 10:48:12.725338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.439 [2024-11-20 10:48:12.725347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.439 [2024-11-20 10:48:12.725357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.439 [2024-11-20 10:48:12.737973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.439 [2024-11-20 10:48:12.738608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.439 [2024-11-20 10:48:12.738670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.439 [2024-11-20 10:48:12.738685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.439 [2024-11-20 10:48:12.738940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.439 [2024-11-20 10:48:12.739183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.439 [2024-11-20 10:48:12.739195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.439 [2024-11-20 10:48:12.739204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.439 [2024-11-20 10:48:12.739214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.439 [2024-11-20 10:48:12.751841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.439 [2024-11-20 10:48:12.752566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.439 [2024-11-20 10:48:12.752633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.439 [2024-11-20 10:48:12.752647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.439 [2024-11-20 10:48:12.752900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.439 [2024-11-20 10:48:12.753126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.439 [2024-11-20 10:48:12.753138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.439 [2024-11-20 10:48:12.753146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.439 [2024-11-20 10:48:12.753171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.439 [2024-11-20 10:48:12.765810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.439 [2024-11-20 10:48:12.766521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.439 [2024-11-20 10:48:12.766588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.439 [2024-11-20 10:48:12.766603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.439 [2024-11-20 10:48:12.766857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.439 [2024-11-20 10:48:12.767082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.439 [2024-11-20 10:48:12.767095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.439 [2024-11-20 10:48:12.767105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.439 [2024-11-20 10:48:12.767115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.439 [2024-11-20 10:48:12.779730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.439 [2024-11-20 10:48:12.780464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.439 [2024-11-20 10:48:12.780531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.439 [2024-11-20 10:48:12.780544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.439 [2024-11-20 10:48:12.780797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.439 [2024-11-20 10:48:12.781023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.439 [2024-11-20 10:48:12.781035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.439 [2024-11-20 10:48:12.781044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.439 [2024-11-20 10:48:12.781053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.439 [2024-11-20 10:48:12.793650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.439 [2024-11-20 10:48:12.794378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.439 [2024-11-20 10:48:12.794444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.439 [2024-11-20 10:48:12.794458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.439 [2024-11-20 10:48:12.794717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.439 [2024-11-20 10:48:12.794943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.439 [2024-11-20 10:48:12.794956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.439 [2024-11-20 10:48:12.794964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.439 [2024-11-20 10:48:12.794974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.439 [2024-11-20 10:48:12.807603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.439 [2024-11-20 10:48:12.808292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.439 [2024-11-20 10:48:12.808357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.439 [2024-11-20 10:48:12.808370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.439 [2024-11-20 10:48:12.808624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.439 [2024-11-20 10:48:12.808850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.439 [2024-11-20 10:48:12.808862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.439 [2024-11-20 10:48:12.808871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.439 [2024-11-20 10:48:12.808881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.701 [2024-11-20 10:48:12.821498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.822088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.822155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.822184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.822437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.822663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.822674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.822683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.822693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.701 [2024-11-20 10:48:12.835300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.835978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.836044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.836057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.836327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.836554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.836573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.836582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.836591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.701 [2024-11-20 10:48:12.849195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.849915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.849981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.849995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.850263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.850491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.850503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.850511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.850521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.701 [2024-11-20 10:48:12.863106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.863783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.863849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.863862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.864115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.864357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.864372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.864381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.864390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.701 [2024-11-20 10:48:12.877048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.877780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.877845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.877858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.878111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.878353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.878366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.878375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.878399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.701 [2024-11-20 10:48:12.891071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.891772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.891838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.891852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.892105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.892348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.892361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.892369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.892379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.701 [2024-11-20 10:48:12.904969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.905606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.905638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.905648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.905869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.906088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.906098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.906106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.906117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.701 [2024-11-20 10:48:12.918904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.919515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.919543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.919552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.919772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.919992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.920002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.920010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.701 [2024-11-20 10:48:12.920020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.701 [2024-11-20 10:48:12.932856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.701 [2024-11-20 10:48:12.933461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.701 [2024-11-20 10:48:12.933528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.701 [2024-11-20 10:48:12.933541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.701 [2024-11-20 10:48:12.933794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.701 [2024-11-20 10:48:12.934020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.701 [2024-11-20 10:48:12.934031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.701 [2024-11-20 10:48:12.934040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:12.934050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.702 [2024-11-20 10:48:12.946659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:12.947348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:12.947414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:12.947427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:12.947680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:12.947905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:12.947917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:12.947927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:12.947936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.702 [2024-11-20 10:48:12.960544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:12.961248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:12.961314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:12.961328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:12.961581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:12.961806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:12.961817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:12.961826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:12.961836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.702 9241.33 IOPS, 36.10 MiB/s [2024-11-20T09:48:13.078Z] [2024-11-20 10:48:12.976146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:12.976856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:12.976923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:12.976936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:12.977213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:12.977439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:12.977451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:12.977460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:12.977469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
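The throughput sample interleaved above, "9241.33 IOPS, 36.10 MiB/s", is internally consistent with 4 KiB I/Os: 9241.33 × 4096 bytes ≈ 37,852,488 bytes per second, and 37,852,488 / 2^20 ≈ 36.10 MiB/s. The 4 KiB block size is inferred from that arithmetic; the log itself does not state the I/O size.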
00:30:40.702 [2024-11-20 10:48:12.990056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:12.990757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:12.990824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:12.990838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:12.991091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:12.991333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:12.991346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:12.991354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:12.991364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.702 [2024-11-20 10:48:13.004022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:13.004498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:13.004532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:13.004542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:13.004765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:13.004987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:13.005000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:13.005008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:13.005017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.702 [2024-11-20 10:48:13.017865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:13.018439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:13.018469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:13.018478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:13.018697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:13.018917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:13.018939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:13.018947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:13.018956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.702 [2024-11-20 10:48:13.031793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:13.032501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:13.032568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:13.032581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:13.032834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:13.033060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:13.033072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:13.033081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:13.033090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.702 [2024-11-20 10:48:13.045714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:13.046393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:13.046460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:13.046473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:13.046725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:13.046951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:13.046963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:13.046971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:13.046981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.702 [2024-11-20 10:48:13.059597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.702 [2024-11-20 10:48:13.060302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.702 [2024-11-20 10:48:13.060369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.702 [2024-11-20 10:48:13.060382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.702 [2024-11-20 10:48:13.060635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.702 [2024-11-20 10:48:13.060862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.702 [2024-11-20 10:48:13.060874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.702 [2024-11-20 10:48:13.060882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.702 [2024-11-20 10:48:13.060899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.964 [2024-11-20 10:48:13.073527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.964 [2024-11-20 10:48:13.074220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.964 [2024-11-20 10:48:13.074286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.964 [2024-11-20 10:48:13.074299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.964 [2024-11-20 10:48:13.074551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.964 [2024-11-20 10:48:13.074778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.964 [2024-11-20 10:48:13.074790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.964 [2024-11-20 10:48:13.074799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.964 [2024-11-20 10:48:13.074809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.964 [2024-11-20 10:48:13.087487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.964 [2024-11-20 10:48:13.088203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.964 [2024-11-20 10:48:13.088268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.964 [2024-11-20 10:48:13.088282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.964 [2024-11-20 10:48:13.088535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.964 [2024-11-20 10:48:13.088761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.964 [2024-11-20 10:48:13.088773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.964 [2024-11-20 10:48:13.088782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.964 [2024-11-20 10:48:13.088792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.964 [2024-11-20 10:48:13.101429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.964 [2024-11-20 10:48:13.102111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.964 [2024-11-20 10:48:13.102193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.964 [2024-11-20 10:48:13.102208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.964 [2024-11-20 10:48:13.102460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.964 [2024-11-20 10:48:13.102687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.964 [2024-11-20 10:48:13.102698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.964 [2024-11-20 10:48:13.102707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.964 [2024-11-20 10:48:13.102717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.964 [2024-11-20 10:48:13.115323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.964 [2024-11-20 10:48:13.115955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.964 [2024-11-20 10:48:13.115985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.964 [2024-11-20 10:48:13.115994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.964 [2024-11-20 10:48:13.116223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.964 [2024-11-20 10:48:13.116445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.964 [2024-11-20 10:48:13.116457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.964 [2024-11-20 10:48:13.116465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.964 [2024-11-20 10:48:13.116474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.964 [2024-11-20 10:48:13.129287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.964 [2024-11-20 10:48:13.129897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.964 [2024-11-20 10:48:13.129925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.964 [2024-11-20 10:48:13.129934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.964 [2024-11-20 10:48:13.130154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.964 [2024-11-20 10:48:13.130382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.964 [2024-11-20 10:48:13.130393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.964 [2024-11-20 10:48:13.130402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.964 [2024-11-20 10:48:13.130411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.964 [2024-11-20 10:48:13.143203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.964 [2024-11-20 10:48:13.143802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.964 [2024-11-20 10:48:13.143829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.964 [2024-11-20 10:48:13.143838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.964 [2024-11-20 10:48:13.144057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.964 [2024-11-20 10:48:13.144284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.964 [2024-11-20 10:48:13.144298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.964 [2024-11-20 10:48:13.144306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.964 [2024-11-20 10:48:13.144314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.965 [2024-11-20 10:48:13.157116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.157776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.157842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.157856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.158117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.158361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.158374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.158383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.158393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.965 [2024-11-20 10:48:13.171010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.171726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.171792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.171807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.172061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.172301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.172314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.172323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.172332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.965 [2024-11-20 10:48:13.184962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.185669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.185736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.185750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.186002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.186239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.186252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.186260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.186270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.965 [2024-11-20 10:48:13.198887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.199637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.199705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.199718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.199971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.200214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.200235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.200244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.200253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
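The second error in every cycle, "Failed to flush tqpair=0x1c60000 (9): Bad file descriptor", is errno 9 (EBADF): by the time the completion path tries to flush, the qpair's socket has already been torn down, so the flush lands on a stale descriptor. A self-contained sketch, illustrative only and not the SPDK code path, showing the same errno from a write on a descriptor that has already been closed:

/*
 * Illustrative only: any I/O on a closed descriptor fails with
 * errno 9 (EBADF), the "(9): Bad file descriptor" seen in the log.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                      /* tear the descriptor down first */
    if (write(fds[1], "x", 1) < 0) {    /* then try to flush through it */
        /* Prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}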
00:30:40.965 [2024-11-20 10:48:13.212854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.213568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.213633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.213646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.213899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.214125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.214136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.214146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.214155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.965 [2024-11-20 10:48:13.226778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.227467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.227533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.227547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.227800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.228026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.228037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.228046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.228055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.965 [2024-11-20 10:48:13.240659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.241266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.241333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.241346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.241599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.241826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.241838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.241848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.241867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.965 [2024-11-20 10:48:13.254479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.255175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.255242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.255258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.255512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.255739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.255752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.255762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.255773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.965 [2024-11-20 10:48:13.268360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.269031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.269097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.269110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.269376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.269605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.269617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.269626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.965 [2024-11-20 10:48:13.269636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.965 [2024-11-20 10:48:13.282122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.965 [2024-11-20 10:48:13.282808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.965 [2024-11-20 10:48:13.282873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.965 [2024-11-20 10:48:13.282886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.965 [2024-11-20 10:48:13.283139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.965 [2024-11-20 10:48:13.283380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.965 [2024-11-20 10:48:13.283394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.965 [2024-11-20 10:48:13.283403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.966 [2024-11-20 10:48:13.283413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.966 [2024-11-20 10:48:13.296055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.966 [2024-11-20 10:48:13.296726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.966 [2024-11-20 10:48:13.296792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.966 [2024-11-20 10:48:13.296806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.966 [2024-11-20 10:48:13.297058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.966 [2024-11-20 10:48:13.297301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.966 [2024-11-20 10:48:13.297325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.966 [2024-11-20 10:48:13.297333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.966 [2024-11-20 10:48:13.297343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.966 [2024-11-20 10:48:13.309940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.966 [2024-11-20 10:48:13.310642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.966 [2024-11-20 10:48:13.310708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:40.966 [2024-11-20 10:48:13.310721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:40.966 [2024-11-20 10:48:13.310974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:40.966 [2024-11-20 10:48:13.311216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.966 [2024-11-20 10:48:13.311228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.966 [2024-11-20 10:48:13.311237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.966 [2024-11-20 10:48:13.311247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.966 [2024-11-20 10:48:13.323241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.966 [2024-11-20 10:48:13.323766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.966 [2024-11-20 10:48:13.323794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:40.966 [2024-11-20 10:48:13.323801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:40.966 [2024-11-20 10:48:13.323954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:40.966 [2024-11-20 10:48:13.324107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.966 [2024-11-20 10:48:13.324115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.966 [2024-11-20 10:48:13.324121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.966 [2024-11-20 10:48:13.324127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.966 [2024-11-20 10:48:13.335898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.336427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.336454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.228 [2024-11-20 10:48:13.336461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.228 [2024-11-20 10:48:13.336620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.228 [2024-11-20 10:48:13.336775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.228 [2024-11-20 10:48:13.336783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.228 [2024-11-20 10:48:13.336790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.228 [2024-11-20 10:48:13.336796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.228 [2024-11-20 10:48:13.348544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.349155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.349214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.228 [2024-11-20 10:48:13.349224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.228 [2024-11-20 10:48:13.349402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.228 [2024-11-20 10:48:13.349558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.228 [2024-11-20 10:48:13.349566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.228 [2024-11-20 10:48:13.349573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.228 [2024-11-20 10:48:13.349581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.228 [2024-11-20 10:48:13.361180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.361784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.361832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.228 [2024-11-20 10:48:13.361842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.228 [2024-11-20 10:48:13.362016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.228 [2024-11-20 10:48:13.362184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.228 [2024-11-20 10:48:13.362193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.228 [2024-11-20 10:48:13.362199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.228 [2024-11-20 10:48:13.362206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.228 [2024-11-20 10:48:13.373799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.374387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.374430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.228 [2024-11-20 10:48:13.374439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.228 [2024-11-20 10:48:13.374610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.228 [2024-11-20 10:48:13.374774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.228 [2024-11-20 10:48:13.374788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.228 [2024-11-20 10:48:13.374794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.228 [2024-11-20 10:48:13.374801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.228 [2024-11-20 10:48:13.386393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.386996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.387037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.228 [2024-11-20 10:48:13.387046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.228 [2024-11-20 10:48:13.387226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.228 [2024-11-20 10:48:13.387381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.228 [2024-11-20 10:48:13.387389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.228 [2024-11-20 10:48:13.387395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.228 [2024-11-20 10:48:13.387403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.228 [2024-11-20 10:48:13.398986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.399601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.399640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.228 [2024-11-20 10:48:13.399648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.228 [2024-11-20 10:48:13.399817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.228 [2024-11-20 10:48:13.399971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.228 [2024-11-20 10:48:13.399978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.228 [2024-11-20 10:48:13.399985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.228 [2024-11-20 10:48:13.399991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.228 [2024-11-20 10:48:13.411596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.228 [2024-11-20 10:48:13.412223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.228 [2024-11-20 10:48:13.412261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.412270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.412439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.412592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.412600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.412606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.412617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.424197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.424774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.424810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.424818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.424985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.425137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.425144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.425150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.425155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.436887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.437443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.437478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.437486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.437652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.437805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.437813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.437818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.437825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.449547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.450145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.450185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.450193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.450359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.450511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.450518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.450524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.450530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.462249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.462843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.462876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.462884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.463049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.463210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.463218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.463224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.463230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.474958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.475435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.475453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.475459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.475609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.475758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.475765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.475770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.475775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.487643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.488083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.488097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.488103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.488257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.488406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.488413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.488418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.488423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.500312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.500730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.500744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.500750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.500903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.229 [2024-11-20 10:48:13.501053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.229 [2024-11-20 10:48:13.501060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.229 [2024-11-20 10:48:13.501066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.229 [2024-11-20 10:48:13.501071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.229 [2024-11-20 10:48:13.512938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.229 [2024-11-20 10:48:13.513433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.229 [2024-11-20 10:48:13.513447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.229 [2024-11-20 10:48:13.513454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.229 [2024-11-20 10:48:13.513603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.513752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.513759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.513765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.513770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.230 [2024-11-20 10:48:13.525576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.230 [2024-11-20 10:48:13.526028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.230 [2024-11-20 10:48:13.526043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.230 [2024-11-20 10:48:13.526049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.230 [2024-11-20 10:48:13.526209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.526360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.526367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.526373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.526378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.230 [2024-11-20 10:48:13.538244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.230 [2024-11-20 10:48:13.538690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.230 [2024-11-20 10:48:13.538704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.230 [2024-11-20 10:48:13.538710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.230 [2024-11-20 10:48:13.538859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.539009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.539024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.539031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.539036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.230 [2024-11-20 10:48:13.550899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.230 [2024-11-20 10:48:13.551368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.230 [2024-11-20 10:48:13.551382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.230 [2024-11-20 10:48:13.551388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.230 [2024-11-20 10:48:13.551536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.551686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.551692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.551698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.551703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.230 [2024-11-20 10:48:13.563565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.230 [2024-11-20 10:48:13.564014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.230 [2024-11-20 10:48:13.564027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.230 [2024-11-20 10:48:13.564032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.230 [2024-11-20 10:48:13.564186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.564336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.564342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.564347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.564352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.230 [2024-11-20 10:48:13.576250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.230 [2024-11-20 10:48:13.576838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.230 [2024-11-20 10:48:13.576869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.230 [2024-11-20 10:48:13.576878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.230 [2024-11-20 10:48:13.577042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.577202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.577210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.577216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.577226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.230 [2024-11-20 10:48:13.588953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.230 [2024-11-20 10:48:13.589415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.230 [2024-11-20 10:48:13.589432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.230 [2024-11-20 10:48:13.589438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.230 [2024-11-20 10:48:13.589588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.230 [2024-11-20 10:48:13.589737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.230 [2024-11-20 10:48:13.589744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.230 [2024-11-20 10:48:13.589750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.230 [2024-11-20 10:48:13.589755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.491 [2024-11-20 10:48:13.601626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.491 [2024-11-20 10:48:13.602118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.491 [2024-11-20 10:48:13.602133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.491 [2024-11-20 10:48:13.602138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.491 [2024-11-20 10:48:13.602292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.491 [2024-11-20 10:48:13.602441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.491 [2024-11-20 10:48:13.602449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.491 [2024-11-20 10:48:13.602454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.491 [2024-11-20 10:48:13.602459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.491 [2024-11-20 10:48:13.614331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.491 [2024-11-20 10:48:13.614921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.491 [2024-11-20 10:48:13.614953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.491 [2024-11-20 10:48:13.614962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.491 [2024-11-20 10:48:13.615126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.491 [2024-11-20 10:48:13.615286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.491 [2024-11-20 10:48:13.615294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.491 [2024-11-20 10:48:13.615300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.491 [2024-11-20 10:48:13.615306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.491 [2024-11-20 10:48:13.627053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.491 [2024-11-20 10:48:13.627528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.491 [2024-11-20 10:48:13.627545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.491 [2024-11-20 10:48:13.627550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.627700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.627849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.627856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.627861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.627866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.639740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.640498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.640519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.640525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.640680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.640831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.640839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.640844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.640849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.652451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.652931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.652946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.652952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.653101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.653256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.653264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.653270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.653275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.665037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.665491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.665505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.665511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.665663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.665812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.665819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.665825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.665830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.677706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.678146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.678162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.678169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.678319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.678468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.678474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.678480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.678484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.690348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.690832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.690845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.690850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.690999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.691148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.691155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.691166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.691172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.703036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.703542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.703556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.703562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.703710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.703860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.703870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.703875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.703880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.715623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.716106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.716120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.716126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.716279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.716429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.716436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.716441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.716446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.728322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.728771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.728803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.728812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.728978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.729131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.729137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.729143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.729149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.741024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.741505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.741522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.492 [2024-11-20 10:48:13.741528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.492 [2024-11-20 10:48:13.741677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.492 [2024-11-20 10:48:13.741827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.492 [2024-11-20 10:48:13.741833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.492 [2024-11-20 10:48:13.741838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.492 [2024-11-20 10:48:13.741847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.492 [2024-11-20 10:48:13.753715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.492 [2024-11-20 10:48:13.754194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.492 [2024-11-20 10:48:13.754227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.754236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.754402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.754555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.754562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.754568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.754574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.766301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.766799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.766815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.766822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.766971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.767122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.767128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.767133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.767139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.779003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.779558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.779590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.779599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.779764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.779917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.779924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.779930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.779936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.791663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.792256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.792288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.792297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.792464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.792617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.792624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.792630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.792636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.804365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.804985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.805017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.805025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.805194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.805347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.805354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.805360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.805365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.816942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.817522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.817554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.817563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.817728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.817880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.817887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.817893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.817899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.829637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.830241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.830273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.830282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.830452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.830605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.830612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.830618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.830624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.842347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.842947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.842978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.842988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.843153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.843311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.843319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.843325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.843331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.493 [2024-11-20 10:48:13.855044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.493 [2024-11-20 10:48:13.855619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.493 [2024-11-20 10:48:13.855651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.493 [2024-11-20 10:48:13.855660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.493 [2024-11-20 10:48:13.855824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.493 [2024-11-20 10:48:13.855977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.493 [2024-11-20 10:48:13.855983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.493 [2024-11-20 10:48:13.855989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.493 [2024-11-20 10:48:13.855995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.867719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.868184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.868200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.868206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.868356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.868505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.868516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.868521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.868526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.880390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.881008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.881040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.881049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.881220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.881374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.881381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.881387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.881393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.892967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.893461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.893493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.893501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.893665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.893817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.893824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.893830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.893835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.905561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.906150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.906187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.906196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.906361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.906513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.906520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.906525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.906535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.918141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.918628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.918644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.918650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.918799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.918949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.918956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.918961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.918967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.930832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.931251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.931283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.931292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.931459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.931611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.931618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.931624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.931630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.943496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.943954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.755 [2024-11-20 10:48:13.943970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.755 [2024-11-20 10:48:13.943976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.755 [2024-11-20 10:48:13.944125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.755 [2024-11-20 10:48:13.944280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.755 [2024-11-20 10:48:13.944288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.755 [2024-11-20 10:48:13.944293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.755 [2024-11-20 10:48:13.944298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.755 [2024-11-20 10:48:13.956147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.755 [2024-11-20 10:48:13.956694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:13.956725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:13.956734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:13.956898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:13.957050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:13.957058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:13.957064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:13.957070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:13.968794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:13.969310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:13.969342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:13.969351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:13.969518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:13.969670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:13.969677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:13.969683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:13.969689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 6931.00 IOPS, 27.07 MiB/s [2024-11-20T09:48:14.132Z] [2024-11-20 10:48:13.981448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:13.981948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:13.981964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:13.981970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:13.982119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:13.982274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:13.982281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:13.982286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:13.982292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:13.994141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:13.994611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:13.994625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:13.994630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:13.994782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:13.994931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:13.994938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:13.994943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:13.994948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:14.006801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:14.007290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:14.007322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:14.007330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:14.007498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:14.007650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:14.007657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:14.007663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:14.007669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:14.019396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:14.019963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:14.019995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:14.020004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:14.020172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:14.020325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:14.020332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:14.020338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:14.020344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:14.032073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:14.032642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:14.032674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:14.032683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:14.032847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:14.033000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:14.033011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:14.033017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:14.033023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:14.044754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:14.045289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:14.045321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:14.045330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:14.045497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:14.045649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:14.045657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:14.045663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:14.045669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:14.057398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:14.057896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:14.057913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:14.057919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:14.058068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:14.058222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:14.058229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:14.058234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:14.058240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.756 [2024-11-20 10:48:14.070094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.756 [2024-11-20 10:48:14.070485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.756 [2024-11-20 10:48:14.070517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.756 [2024-11-20 10:48:14.070526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.756 [2024-11-20 10:48:14.070692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.756 [2024-11-20 10:48:14.070845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.756 [2024-11-20 10:48:14.070852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.756 [2024-11-20 10:48:14.070857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.756 [2024-11-20 10:48:14.070867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.757 [2024-11-20 10:48:14.082739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.757 [2024-11-20 10:48:14.083250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.757 [2024-11-20 10:48:14.083266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.757 [2024-11-20 10:48:14.083272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.757 [2024-11-20 10:48:14.083421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.757 [2024-11-20 10:48:14.083571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.757 [2024-11-20 10:48:14.083578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.757 [2024-11-20 10:48:14.083583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.757 [2024-11-20 10:48:14.083588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.757 [2024-11-20 10:48:14.095444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.757 [2024-11-20 10:48:14.095783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.757 [2024-11-20 10:48:14.095798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.757 [2024-11-20 10:48:14.095804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.757 [2024-11-20 10:48:14.095953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.757 [2024-11-20 10:48:14.096102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.757 [2024-11-20 10:48:14.096109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.757 [2024-11-20 10:48:14.096115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.757 [2024-11-20 10:48:14.096120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.757 [2024-11-20 10:48:14.108113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.757 [2024-11-20 10:48:14.108573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.757 [2024-11-20 10:48:14.108587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.757 [2024-11-20 10:48:14.108592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.757 [2024-11-20 10:48:14.108741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.757 [2024-11-20 10:48:14.108890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.757 [2024-11-20 10:48:14.108897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.757 [2024-11-20 10:48:14.108902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.757 [2024-11-20 10:48:14.108906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:41.757 [2024-11-20 10:48:14.120751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:41.757 [2024-11-20 10:48:14.121418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.757 [2024-11-20 10:48:14.121450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:41.757 [2024-11-20 10:48:14.121459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:41.757 [2024-11-20 10:48:14.121624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:41.757 [2024-11-20 10:48:14.121776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:41.757 [2024-11-20 10:48:14.121783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:41.757 [2024-11-20 10:48:14.121789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:41.757 [2024-11-20 10:48:14.121795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.133392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.133892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.133908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.133914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.134063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.134219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.134226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.134231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.134236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.146092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.146582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.146596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.146601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.146750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.146900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.146906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.146912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.146917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.158765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.159397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.159429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.159438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.159607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.159760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.159768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.159774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.159781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.171433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.172028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.172060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.172068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.172238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.172392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.172398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.172404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.172410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.184140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.184709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.184741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.184750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.184915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.185067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.185075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.185080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.185086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.196811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.197389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.197421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.197431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.197595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.197748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.197758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.197764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.197770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.209500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.210075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.210107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.210116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.210290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.210442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.210450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.210456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.210463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.222184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.222755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.222787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.222796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.019 [2024-11-20 10:48:14.222960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.019 [2024-11-20 10:48:14.223113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.019 [2024-11-20 10:48:14.223120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.019 [2024-11-20 10:48:14.223125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.019 [2024-11-20 10:48:14.223131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.019 [2024-11-20 10:48:14.234863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.019 [2024-11-20 10:48:14.235373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.019 [2024-11-20 10:48:14.235390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.019 [2024-11-20 10:48:14.235396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.235545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.235695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.235701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.235706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.235718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.247575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.248044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.248058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.248064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.248216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.248367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.248373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.248378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.248386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.260239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.260821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.260852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.260862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.261027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.261185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.261193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.261200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.261206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.272922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.273372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.273389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.273395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.273544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.273694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.273700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.273706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.273711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.285572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.286063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.286078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.286084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.286238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.286388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.286394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.286400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.286405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.298250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.298702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.298715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.298720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.298869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.299019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.299025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.299031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.299036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.310881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.311435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.311466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.311475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.311639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.311791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.311798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.311804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.311810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.323527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.323985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.324000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.324006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.324165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.324315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.324322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.324327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.324332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.336209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.336796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.336828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.336837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.337001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.337154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.337168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.337173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.337179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.348894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.349459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.349491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.020 [2024-11-20 10:48:14.349500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.020 [2024-11-20 10:48:14.349665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.020 [2024-11-20 10:48:14.349817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.020 [2024-11-20 10:48:14.349824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.020 [2024-11-20 10:48:14.349830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.020 [2024-11-20 10:48:14.349837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.020 [2024-11-20 10:48:14.361557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.020 [2024-11-20 10:48:14.362004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.020 [2024-11-20 10:48:14.362020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.021 [2024-11-20 10:48:14.362026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.021 [2024-11-20 10:48:14.362209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.021 [2024-11-20 10:48:14.362360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.021 [2024-11-20 10:48:14.362371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.021 [2024-11-20 10:48:14.362376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.021 [2024-11-20 10:48:14.362381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.021 [2024-11-20 10:48:14.374226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.021 [2024-11-20 10:48:14.374811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.021 [2024-11-20 10:48:14.374843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.021 [2024-11-20 10:48:14.374852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.021 [2024-11-20 10:48:14.375017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.021 [2024-11-20 10:48:14.375178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.021 [2024-11-20 10:48:14.375186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.021 [2024-11-20 10:48:14.375191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.021 [2024-11-20 10:48:14.375197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.021 [2024-11-20 10:48:14.386915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.021 [2024-11-20 10:48:14.387473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.021 [2024-11-20 10:48:14.387504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.021 [2024-11-20 10:48:14.387513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.021 [2024-11-20 10:48:14.387678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.021 [2024-11-20 10:48:14.387830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.021 [2024-11-20 10:48:14.387837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.021 [2024-11-20 10:48:14.387842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.021 [2024-11-20 10:48:14.387848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.282 [2024-11-20 10:48:14.399571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.282 [2024-11-20 10:48:14.400041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.282 [2024-11-20 10:48:14.400057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.282 [2024-11-20 10:48:14.400064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.282 [2024-11-20 10:48:14.400219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.282 [2024-11-20 10:48:14.400369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.282 [2024-11-20 10:48:14.400375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.282 [2024-11-20 10:48:14.400381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.282 [2024-11-20 10:48:14.400389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.282 [2024-11-20 10:48:14.412234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.282 [2024-11-20 10:48:14.412814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.282 [2024-11-20 10:48:14.412846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.282 [2024-11-20 10:48:14.412855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.282 [2024-11-20 10:48:14.413020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.282 [2024-11-20 10:48:14.413180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.282 [2024-11-20 10:48:14.413188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.282 [2024-11-20 10:48:14.413193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.282 [2024-11-20 10:48:14.413199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.282 [2024-11-20 10:48:14.424909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.282 [2024-11-20 10:48:14.425443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.282 [2024-11-20 10:48:14.425475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.282 [2024-11-20 10:48:14.425484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.282 [2024-11-20 10:48:14.425648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.282 [2024-11-20 10:48:14.425801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.282 [2024-11-20 10:48:14.425808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.282 [2024-11-20 10:48:14.425813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.282 [2024-11-20 10:48:14.425819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.282 [2024-11-20 10:48:14.437546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.282 [2024-11-20 10:48:14.438136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.282 [2024-11-20 10:48:14.438172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.282 [2024-11-20 10:48:14.438181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.282 [2024-11-20 10:48:14.438346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.282 [2024-11-20 10:48:14.438498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.282 [2024-11-20 10:48:14.438504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.282 [2024-11-20 10:48:14.438511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.282 [2024-11-20 10:48:14.438517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.282 [2024-11-20 10:48:14.450234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.282 [2024-11-20 10:48:14.450807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.282 [2024-11-20 10:48:14.450839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.282 [2024-11-20 10:48:14.450848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.282 [2024-11-20 10:48:14.451013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.282 [2024-11-20 10:48:14.451172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.282 [2024-11-20 10:48:14.451180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.451185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.451191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.462901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.463459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.463491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.463500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.463664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.463816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.463823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.463829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.463835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.475553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.476121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.476153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.476169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.476335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.476488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.476495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.476501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.476507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.488228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.488801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.488834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.488843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.489012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.489173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.489184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.489189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.489195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.500905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.501465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.501497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.501506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.501671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.501823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.501830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.501836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.501842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.513558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.514012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.514028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.514034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.514188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.514339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.514346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.514352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.514358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.526210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.526688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.526702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.526708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.526858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.527007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.527017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.527023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.527028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.538907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.539446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.539479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.539487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.539652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.539805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.539811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.539817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.539823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.551633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.552233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.552265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.552274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.552440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.552593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.552600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.552606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.552612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.564335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.564926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.564957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.564966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.283 [2024-11-20 10:48:14.565130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.283 [2024-11-20 10:48:14.565290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.283 [2024-11-20 10:48:14.565297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.283 [2024-11-20 10:48:14.565303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.283 [2024-11-20 10:48:14.565312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.283 [2024-11-20 10:48:14.577026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.283 [2024-11-20 10:48:14.577603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.283 [2024-11-20 10:48:14.577635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.283 [2024-11-20 10:48:14.577643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.577808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.577960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.577966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.577972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.577978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.284 [2024-11-20 10:48:14.589699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.284 [2024-11-20 10:48:14.590281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.284 [2024-11-20 10:48:14.590314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.284 [2024-11-20 10:48:14.590322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.590489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.590641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.590648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.590653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.590659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.284 [2024-11-20 10:48:14.602375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.284 [2024-11-20 10:48:14.602985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.284 [2024-11-20 10:48:14.603017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.284 [2024-11-20 10:48:14.603026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.603198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.603351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.603358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.603364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.603370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.284 [2024-11-20 10:48:14.615079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.284 [2024-11-20 10:48:14.615668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.284 [2024-11-20 10:48:14.615700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.284 [2024-11-20 10:48:14.615709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.615873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.616026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.616033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.616039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.616044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.284 [2024-11-20 10:48:14.627759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.284 [2024-11-20 10:48:14.628289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.284 [2024-11-20 10:48:14.628320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.284 [2024-11-20 10:48:14.628329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.628496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.628648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.628655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.628661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.628667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.284 [2024-11-20 10:48:14.640393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.284 [2024-11-20 10:48:14.640735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.284 [2024-11-20 10:48:14.640752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.284 [2024-11-20 10:48:14.640758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.640908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.641058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.641065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.641070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.641075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.284 [2024-11-20 10:48:14.653067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.284 [2024-11-20 10:48:14.653554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.284 [2024-11-20 10:48:14.653567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.284 [2024-11-20 10:48:14.653573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.284 [2024-11-20 10:48:14.653725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.284 [2024-11-20 10:48:14.653875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.284 [2024-11-20 10:48:14.653881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.284 [2024-11-20 10:48:14.653886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.284 [2024-11-20 10:48:14.653891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.545 [2024-11-20 10:48:14.665740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.545 [2024-11-20 10:48:14.666085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.545 [2024-11-20 10:48:14.666100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.666106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.666261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.666411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.666418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.666424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.666429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.678414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.678897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.678910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.678916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.679064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.679226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.679234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.679240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.679244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.691055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.691644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.691676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.691685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.691849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.692001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.692012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.692018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.692025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.703743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.704381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.704412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.704421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.704586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.704738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.704744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.704750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.704756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.716332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.716920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.716952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.716960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.717125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.717284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.717292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.717298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.717303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.729016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.729606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.729639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.729648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.729812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.729964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.729972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.729977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.729987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.741718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.742197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.742220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.742226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.742381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.742531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.742538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.742543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.742548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.754420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.755011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.755042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.755051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.755222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.755375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.755382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.755388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.755394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.767105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.767708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.546 [2024-11-20 10:48:14.767740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.546 [2024-11-20 10:48:14.767749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.546 [2024-11-20 10:48:14.767915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.546 [2024-11-20 10:48:14.768068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.546 [2024-11-20 10:48:14.768075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.546 [2024-11-20 10:48:14.768080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.546 [2024-11-20 10:48:14.768086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.546 [2024-11-20 10:48:14.779819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.546 [2024-11-20 10:48:14.780461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.780494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.780503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.780670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.780823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.780831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.780837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.780843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.792422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.792917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.792933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.792939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.793088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.793244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.793252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.793257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.793262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.805107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.805716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.805747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.805756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.805920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.806073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.806079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.806085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.806092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.817732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.818259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.818291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.818300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.818471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.818623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.818630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.818636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.818642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.830365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.830913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.830945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.830954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.831118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.831283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.831291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.831296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.831302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.843013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.843606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.843638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.843646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.843810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.843963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.843970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.843975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.843981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.855697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.856193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.856224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.856233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.856397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.856549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.856562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.856568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.856575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.868296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.868877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.868908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.868917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.869081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.869241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.869250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.869255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.869261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.880979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.881573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.881605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.547 [2024-11-20 10:48:14.881614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.547 [2024-11-20 10:48:14.881778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.547 [2024-11-20 10:48:14.881930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.547 [2024-11-20 10:48:14.881937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.547 [2024-11-20 10:48:14.881942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.547 [2024-11-20 10:48:14.881948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.547 [2024-11-20 10:48:14.893669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.547 [2024-11-20 10:48:14.894220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.547 [2024-11-20 10:48:14.894252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.548 [2024-11-20 10:48:14.894261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.548 [2024-11-20 10:48:14.894426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.548 [2024-11-20 10:48:14.894579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.548 [2024-11-20 10:48:14.894586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.548 [2024-11-20 10:48:14.894591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.548 [2024-11-20 10:48:14.894601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.548 [2024-11-20 10:48:14.906322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.548 [2024-11-20 10:48:14.906916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.548 [2024-11-20 10:48:14.906948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420
00:30:42.548 [2024-11-20 10:48:14.906956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set
00:30:42.548 [2024-11-20 10:48:14.907121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor
00:30:42.548 [2024-11-20 10:48:14.907281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.548 [2024-11-20 10:48:14.907288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.548 [2024-11-20 10:48:14.907294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.548 [2024-11-20 10:48:14.907300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.810 [2024-11-20 10:48:14.919016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.810 [2024-11-20 10:48:14.919507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.810 [2024-11-20 10:48:14.919523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.810 [2024-11-20 10:48:14.919529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.810 [2024-11-20 10:48:14.919678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.810 [2024-11-20 10:48:14.919828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.810 [2024-11-20 10:48:14.919834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.810 [2024-11-20 10:48:14.919840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.810 [2024-11-20 10:48:14.919844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.810 [2024-11-20 10:48:14.931691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.810 [2024-11-20 10:48:14.932293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.810 [2024-11-20 10:48:14.932325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.810 [2024-11-20 10:48:14.932334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.810 [2024-11-20 10:48:14.932499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.810 [2024-11-20 10:48:14.932651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.810 [2024-11-20 10:48:14.932658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.810 [2024-11-20 10:48:14.932663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.810 [2024-11-20 10:48:14.932670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.810 [2024-11-20 10:48:14.944278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.810 [2024-11-20 10:48:14.944878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.810 [2024-11-20 10:48:14.944910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.810 [2024-11-20 10:48:14.944919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.810 [2024-11-20 10:48:14.945083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.810 [2024-11-20 10:48:14.945242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.810 [2024-11-20 10:48:14.945249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.810 [2024-11-20 10:48:14.945255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.810 [2024-11-20 10:48:14.945261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.810 [2024-11-20 10:48:14.956994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.810 [2024-11-20 10:48:14.957599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.810 [2024-11-20 10:48:14.957631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:14.957639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:14.957804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:14.957956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:14.957963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:14.957969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:14.957975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.811 [2024-11-20 10:48:14.969695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:14.970155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:14.970178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:14.970183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:14.970332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:14.970482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:14.970489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:14.970494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:14.970500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.811 5544.80 IOPS, 21.66 MiB/s [2024-11-20T09:48:15.187Z] [2024-11-20 10:48:14.982377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:14.982865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:14.982880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:14.982889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:14.983038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:14.983193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:14.983199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:14.983205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:14.983210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.811 [2024-11-20 10:48:14.995052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:14.995506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:14.995519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:14.995525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:14.995674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:14.995823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:14.995830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:14.995835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:14.995841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.811 [2024-11-20 10:48:15.007684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.008166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:15.008179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:15.008185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:15.008333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:15.008482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:15.008489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:15.008494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:15.008499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.811 [2024-11-20 10:48:15.020340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.020693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:15.020706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:15.020712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:15.020860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:15.021010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:15.021020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:15.021025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:15.021030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.811 [2024-11-20 10:48:15.033016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.033446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:15.033458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:15.033464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:15.033613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:15.033762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:15.033769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:15.033774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:15.033780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.811 [2024-11-20 10:48:15.045633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.046118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:15.046131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:15.046137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:15.046291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:15.046441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:15.046448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:15.046453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:15.046458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.811 [2024-11-20 10:48:15.058307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.058860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:15.058892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:15.058901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:15.059065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:15.059225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:15.059233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:15.059238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:15.059248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.811 [2024-11-20 10:48:15.070966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.071507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.811 [2024-11-20 10:48:15.071539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.811 [2024-11-20 10:48:15.071548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.811 [2024-11-20 10:48:15.071712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.811 [2024-11-20 10:48:15.071865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.811 [2024-11-20 10:48:15.071872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.811 [2024-11-20 10:48:15.071877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.811 [2024-11-20 10:48:15.071885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.811 [2024-11-20 10:48:15.083620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.811 [2024-11-20 10:48:15.084240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.084272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.084281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.084447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.084599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.084607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.084612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.084618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
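When triaging a storm like this, two counts settle whether any attempt ever reached a listening target; matching totals mean every reconnect died at connect(). build.log here is a hypothetical saved copy of this console output:

  # Count reconnect attempts and ECONNREFUSED failures; equal counts mean
  # no attempt ever got past the TCP handshake.
  grep -c 'nvme_ctrlr_disconnect: \*NOTICE\*.*resetting controller' build.log
  grep -c 'posix_sock_create: \*ERROR\*: connect() failed, errno = 111' build.log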
00:30:42.812 [2024-11-20 10:48:15.096200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.096789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.096820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.096829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.096994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.097146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.097153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.097166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.097172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.812 [2024-11-20 10:48:15.108885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.109526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.109558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.109567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.109732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.109884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.109891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.109897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.109903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.812 [2024-11-20 10:48:15.121476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.122028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.122060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.122069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.122240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.122393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.122400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.122406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.122411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.812 [2024-11-20 10:48:15.134119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.134675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.134707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.134716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.134880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.135032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.135039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.135045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.135051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.812 [2024-11-20 10:48:15.146775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.147282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.147314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.147326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.147493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.147645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.147653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.147658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.147664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.812 [2024-11-20 10:48:15.159383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.159870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.159886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.159892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.160041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.160195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.160202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.160208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.160212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.812 [2024-11-20 10:48:15.172082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.812 [2024-11-20 10:48:15.172578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.812 [2024-11-20 10:48:15.172593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:42.812 [2024-11-20 10:48:15.172599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:42.812 [2024-11-20 10:48:15.172747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:42.812 [2024-11-20 10:48:15.172897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.812 [2024-11-20 10:48:15.172904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.812 [2024-11-20 10:48:15.172909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.812 [2024-11-20 10:48:15.172914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.074 [2024-11-20 10:48:15.184777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.074 [2024-11-20 10:48:15.185304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.074 [2024-11-20 10:48:15.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.074 [2024-11-20 10:48:15.185345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.074 [2024-11-20 10:48:15.185511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.074 [2024-11-20 10:48:15.185664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.074 [2024-11-20 10:48:15.185674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.074 [2024-11-20 10:48:15.185681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.074 [2024-11-20 10:48:15.185687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.074 [2024-11-20 10:48:15.197408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.074 [2024-11-20 10:48:15.197979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.074 [2024-11-20 10:48:15.198011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.074 [2024-11-20 10:48:15.198019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.074 [2024-11-20 10:48:15.198191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.074 [2024-11-20 10:48:15.198345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.074 [2024-11-20 10:48:15.198352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.074 [2024-11-20 10:48:15.198358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.074 [2024-11-20 10:48:15.198364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.074 [2024-11-20 10:48:15.210089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.074 [2024-11-20 10:48:15.210519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.074 [2024-11-20 10:48:15.210536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.074 [2024-11-20 10:48:15.210542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.074 [2024-11-20 10:48:15.210691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.074 [2024-11-20 10:48:15.210841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.074 [2024-11-20 10:48:15.210847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.074 [2024-11-20 10:48:15.210852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.074 [2024-11-20 10:48:15.210858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.074 [2024-11-20 10:48:15.222717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.074 [2024-11-20 10:48:15.223155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.074 [2024-11-20 10:48:15.223174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.074 [2024-11-20 10:48:15.223180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.074 [2024-11-20 10:48:15.223328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.074 [2024-11-20 10:48:15.223477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.223484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.223489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.223498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.075 [2024-11-20 10:48:15.235363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.235897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.235928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.235937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.236102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.236262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.236271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.236276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.236282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
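The timestamps also show the retry cadence: attempts land roughly every 12 to 13 ms, with bdevperf still emitting its periodic throughput samples in between (the 5544.80 IOPS line earlier). A rough check of that cadence, again against a hypothetical build.log and assuming all records fall within the same date:

  # Print the gap between consecutive reconnect attempts by diffing the
  # timestamps on the nvme_ctrlr_disconnect records (grep -o emits one
  # match per line even if the saved log packs several records per line).
  grep -o '\[2024-11-20 [^]]*\] nvme_ctrlr.c:1728' build.log |
      awk -F'[][]' '{split($2, t, /[ :]/); s = t[2]*3600 + t[3]*60 + t[4];
                     if (p) printf "%.1f ms\n", (s - p) * 1000; p = s}'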
00:30:43.075 [2024-11-20 10:48:15.247999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.248549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.248580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.248589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.248756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.248908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.248916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.248922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.248929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.075 [2024-11-20 10:48:15.260667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.261131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.261147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.261153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.261309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.261460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.261467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.261472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.261477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.075 [2024-11-20 10:48:15.273353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.273881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.273895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.273901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.274051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.274207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.274214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.274219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.274224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.075 [2024-11-20 10:48:15.285961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.286534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.286566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.286576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.286743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.286896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.286903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.286910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.286916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.075 [2024-11-20 10:48:15.298658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.299285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.299317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.299326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.299493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.299646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.299653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.299658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.299665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.075 [2024-11-20 10:48:15.311244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.311716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.311747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.311759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.311933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.312085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.312091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.312097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.312103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.075 [2024-11-20 10:48:15.323834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.324167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.324184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.324189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.324338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.324488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.324494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.324500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.324504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.075 [2024-11-20 10:48:15.336514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.336954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.336969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.336974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.337123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.075 [2024-11-20 10:48:15.337280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.075 [2024-11-20 10:48:15.337287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.075 [2024-11-20 10:48:15.337293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.075 [2024-11-20 10:48:15.337298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.075 [2024-11-20 10:48:15.349154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.075 [2024-11-20 10:48:15.349711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.075 [2024-11-20 10:48:15.349743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.075 [2024-11-20 10:48:15.349752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.075 [2024-11-20 10:48:15.349916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.350069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.350079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.350085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.350091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.076 [2024-11-20 10:48:15.361822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.362433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.362465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.362474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.362638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.362791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.362798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.362803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.362809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.076 [2024-11-20 10:48:15.374410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.374910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.374941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.374950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.375117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.375275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.375282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.375288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.375294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.076 [2024-11-20 10:48:15.387023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.387529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.387545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.387551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.387700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.387850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.387857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.387862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.387871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
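Nothing in this log shows how the controller was attached, but the open-ended loop seen here is bdev_nvme's reconnect behavior at work; when a bounded retry window is wanted, rpc.py exposes per-controller knobs at attach time. A sketch with illustrative values (the address, port and subsystem NQN are taken from the log; the timeout numbers are assumptions, not what this test used):

  # Retry every 5 s, fail pending I/O after 30 s, delete the controller
  # after 60 s without a successful reconnect (illustrative values).
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 60 --reconnect-delay-sec 5 \
      --fast-io-fail-timeout-sec 30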
00:30:43.076 [2024-11-20 10:48:15.399726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.400208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.400229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.400235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.400388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.400538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.400546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.400551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.400556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.076 [2024-11-20 10:48:15.412419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.412896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.412911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.412916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.413065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.413218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.413225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.413231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.413236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.076 [2024-11-20 10:48:15.425093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.425434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.425451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.425457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.425606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.425756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.425762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.425768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.425773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.076 [2024-11-20 10:48:15.437794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.076 [2024-11-20 10:48:15.438243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.076 [2024-11-20 10:48:15.438257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.076 [2024-11-20 10:48:15.438262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.076 [2024-11-20 10:48:15.438411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.076 [2024-11-20 10:48:15.438561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.076 [2024-11-20 10:48:15.438568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.076 [2024-11-20 10:48:15.438573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.076 [2024-11-20 10:48:15.438579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.338 [2024-11-20 10:48:15.450484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.338 [2024-11-20 10:48:15.450841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.338 [2024-11-20 10:48:15.450855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.338 [2024-11-20 10:48:15.450861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.338 [2024-11-20 10:48:15.451010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.338 [2024-11-20 10:48:15.451164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.338 [2024-11-20 10:48:15.451172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.338 [2024-11-20 10:48:15.451177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.338 [2024-11-20 10:48:15.451182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.338 [2024-11-20 10:48:15.463190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.338 [2024-11-20 10:48:15.463632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.338 [2024-11-20 10:48:15.463646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.338 [2024-11-20 10:48:15.463651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.338 [2024-11-20 10:48:15.463799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.338 [2024-11-20 10:48:15.463949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.338 [2024-11-20 10:48:15.463956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.338 [2024-11-20 10:48:15.463961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.338 [2024-11-20 10:48:15.463966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.338 [2024-11-20 10:48:15.475831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.338 [2024-11-20 10:48:15.476282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.338 [2024-11-20 10:48:15.476296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.338 [2024-11-20 10:48:15.476302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.338 [2024-11-20 10:48:15.476458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.338 [2024-11-20 10:48:15.476607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.338 [2024-11-20 10:48:15.476614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.338 [2024-11-20 10:48:15.476619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.338 [2024-11-20 10:48:15.476624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2235929 Killed "${NVMF_APP[@]}" "$@" 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2238032 00:30:43.338 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2238032 00:30:43.338 [2024-11-20 10:48:15.488500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2238032 ']' 00:30:43.339 [2024-11-20 10:48:15.488987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.489001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.489007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.339 [2024-11-20 10:48:15.489156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.489315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.489321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.489328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.489334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.339 10:48:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.339 [2024-11-20 10:48:15.501204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.501692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.501706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.501716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.501866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.502015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.502022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.502028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.502033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
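[editor's note] While the reconnect loop keeps failing, the interleaved shell trace shows tgt_init restarting the target (nvmfappstart -m 0xE, nvmfpid=2238032) and then blocking in waitforlisten until the new process accepts on the RPC socket /var/tmp/spdk.sock. The real waitforlisten is a shell helper in the SPDK test harness; a sketch of the same wait in C, with an illustrative 100 ms poll interval and 30 s timeout (assumed values, not the harness's):

    /* A sketch of what "waitforlisten" waits for: retry connect() on the
     * RPC socket until the freshly started nvmf_tgt accepts. The socket
     * path comes from the log; the poll interval and timeout are
     * illustrative assumptions. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_rpc_listener(const char *path, int timeout_ms)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

        for (int waited = 0; waited < timeout_ms; waited += 100) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);          /* listener is up; RPCs can be issued */
                return 0;
            }
            if (fd >= 0) {
                close(fd);
            }
            usleep(100 * 1000);     /* not listening yet; poll again */
        }
        return -1;                  /* timed out */
    }

    int main(void)
    {
        return wait_for_rpc_listener("/var/tmp/spdk.sock", 30000) ? 1 : 0;
    }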
00:30:43.339 [2024-11-20 10:48:15.513903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.514240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.514255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.514260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.514409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.514559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.514566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.514571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.514577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.339 [2024-11-20 10:48:15.526587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.527038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.527051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.527057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.527212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.527362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.527369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.527374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.527378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.339 [2024-11-20 10:48:15.539249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.539740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.539753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.539759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.539907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.540061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.540068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.540073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.540078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.339 [2024-11-20 10:48:15.551949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.552279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.552294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.552300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.552448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.552598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.552604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.552610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.552615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.339 [2024-11-20 10:48:15.553236] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:30:43.339 [2024-11-20 10:48:15.553283] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.339 [2024-11-20 10:48:15.564623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.565117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.565131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.565137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.565291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.565441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.565449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.565454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.565459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.339 [2024-11-20 10:48:15.577358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.339 [2024-11-20 10:48:15.577827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.339 [2024-11-20 10:48:15.577860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.339 [2024-11-20 10:48:15.577869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.339 [2024-11-20 10:48:15.578042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.339 [2024-11-20 10:48:15.578200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.339 [2024-11-20 10:48:15.578208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.339 [2024-11-20 10:48:15.578214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.339 [2024-11-20 10:48:15.578220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.339 [2024-11-20 10:48:15.590040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.590488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.590519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.590528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.590693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.590845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.590853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.590859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.590865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.340 [2024-11-20 10:48:15.602737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.603297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.603329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.603338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.603505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.603658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.603664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.603670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.603676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
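[editor's note] Each failed attempt walks the same frames in order: nvme_ctrlr_disconnect ("resetting controller"), posix_sock_create (ECONNREFUSED), nvme_tcp_qpair_connect_sock, spdk_nvme_ctrlr_reconnect_poll_async ("controller reinitialization failed"), and finally bdev_nvme_reset_ctrlr_complete ("Resetting controller failed"), with the next attempt scheduled roughly 12-13 ms later (15.400 -> 15.412 -> 15.425 ...). A sketch of the caller side of that poll loop; only spdk_nvme_ctrlr_reconnect_poll_async is named in the log, and the disconnect/reconnect_async pairing and the -EAGAIN "still in progress" convention are assumptions inferred from the log, not verified against the SPDK headers:

    /* A sketch (not bdev_nvme's actual code) of the retry cycle these
     * records trace. Assumed semantics: reconnect_poll_async() returns
     * -EAGAIN while the async reinit is still in progress, 0 on success,
     * and another negative errno on the failure logged above. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int try_reconnect_once(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        spdk_nvme_ctrlr_disconnect(ctrlr);       /* "resetting controller" */
        spdk_nvme_ctrlr_reconnect_async(ctrlr);  /* start a new connect */

        do {
            /* Drives the async reinit state machine; while the TCP connect
             * keeps being refused this ends in "reinitialization failed". */
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);

        return rc;
    }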
00:30:43.340 [2024-11-20 10:48:15.615413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.615893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.615910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.615916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.616065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.616219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.616230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.616236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.616241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.340 [2024-11-20 10:48:15.628096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.628554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.628569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.628575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.628724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.628873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.628879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.628885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.628889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.340 [2024-11-20 10:48:15.640754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.641260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.641274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.641279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.641428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.641577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.641584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.641589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.641594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.340 [2024-11-20 10:48:15.643178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.340 [2024-11-20 10:48:15.653357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.653954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.653987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.653996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.654170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.654324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.654331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.654337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.654347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.340 [2024-11-20 10:48:15.666070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.666548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.666564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.666570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.666719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.666869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.666876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.666881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.666886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.340 [2024-11-20 10:48:15.672294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.340 [2024-11-20 10:48:15.672315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.340 [2024-11-20 10:48:15.672322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.340 [2024-11-20 10:48:15.672327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.340 [2024-11-20 10:48:15.672332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:43.340 [2024-11-20 10:48:15.673461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.340 [2024-11-20 10:48:15.673610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.340 [2024-11-20 10:48:15.673613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.340 [2024-11-20 10:48:15.678759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.679289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.679323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.679332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.679503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.679656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.679663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.679669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.679675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.340 [2024-11-20 10:48:15.691428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.691916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.340 [2024-11-20 10:48:15.691933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.340 [2024-11-20 10:48:15.691938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.340 [2024-11-20 10:48:15.692093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.340 [2024-11-20 10:48:15.692249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.340 [2024-11-20 10:48:15.692256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.340 [2024-11-20 10:48:15.692261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.340 [2024-11-20 10:48:15.692268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
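[editor's note] The three "Reactor started on core N" notices line up with the -m 0xE mask passed to nvmf_tgt (echoed as "-c 0xE" in the EAL parameters above): 0xE is binary 1110, so bits 1, 2 and 3 are set, core 0 is left free, and "Total cores available: 3" follows. A small decode of that convention:

    /* Decode an SPDK/DPDK core mask the way the log's "-m 0xE" is
     * consumed: each set bit selects one CPU core to run a reactor on.
     * For 0xE (binary 1110) this prints cores 1, 2 and 3, matching the
     * three "Reactor started on core N" notices above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;   /* core mask from the log's command line */

        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf("reactor on core %d\n", core);
            }
        }
        return 0;
    }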
00:30:43.340 [2024-11-20 10:48:15.704135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.340 [2024-11-20 10:48:15.704507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.341 [2024-11-20 10:48:15.704524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.341 [2024-11-20 10:48:15.704529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.341 [2024-11-20 10:48:15.704679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.341 [2024-11-20 10:48:15.704830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.341 [2024-11-20 10:48:15.704836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.341 [2024-11-20 10:48:15.704842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.341 [2024-11-20 10:48:15.704847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.602 [2024-11-20 10:48:15.716715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.602 [2024-11-20 10:48:15.717374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.602 [2024-11-20 10:48:15.717410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.602 [2024-11-20 10:48:15.717419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.602 [2024-11-20 10:48:15.717587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.602 [2024-11-20 10:48:15.717740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.602 [2024-11-20 10:48:15.717747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.602 [2024-11-20 10:48:15.717753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.602 [2024-11-20 10:48:15.717759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.602 [2024-11-20 10:48:15.729349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.602 [2024-11-20 10:48:15.729939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.602 [2024-11-20 10:48:15.729971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.602 [2024-11-20 10:48:15.729980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.602 [2024-11-20 10:48:15.730147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.602 [2024-11-20 10:48:15.730306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.602 [2024-11-20 10:48:15.730319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.602 [2024-11-20 10:48:15.730324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.602 [2024-11-20 10:48:15.730330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.603 [2024-11-20 10:48:15.741924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.742448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.742481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.742489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.742654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.742806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.742813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.742819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.742825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.603 [2024-11-20 10:48:15.754553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.755020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.755036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.755042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.755196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.755347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.755354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.755359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.755364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.603 [2024-11-20 10:48:15.767222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.767796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.767827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.767836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.768001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.768153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.768166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.768173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.768182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.603 [2024-11-20 10:48:15.779906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.780334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.780352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.780358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.780508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.780657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.780664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.780669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.780674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.603 [2024-11-20 10:48:15.792595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.793103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.793119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.793125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.793278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.793428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.793435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.793441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.793446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.603 [2024-11-20 10:48:15.805307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.805819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.805833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.805839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.805988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.806137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.806144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.806149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.806154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.603 [2024-11-20 10:48:15.818009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.818477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.818513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.818521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.818686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.818839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.818846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.818851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.818857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.603 [2024-11-20 10:48:15.830585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.831196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.831228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.831237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.831403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.831555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.831563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.831568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.831574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.603 [2024-11-20 10:48:15.843174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.843714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.843745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.843755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.843920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.844073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.844080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.844086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.603 [2024-11-20 10:48:15.844092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.603 [2024-11-20 10:48:15.855820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.603 [2024-11-20 10:48:15.856303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.603 [2024-11-20 10:48:15.856321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.603 [2024-11-20 10:48:15.856327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.603 [2024-11-20 10:48:15.856480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.603 [2024-11-20 10:48:15.856630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.603 [2024-11-20 10:48:15.856636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.603 [2024-11-20 10:48:15.856641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.856646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.604 [2024-11-20 10:48:15.868505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.868999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.869013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.869019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.869172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.869322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.869329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.869334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.869338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.604 [2024-11-20 10:48:15.881195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.881795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.881827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.881835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.882000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.882152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.882166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.882172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.882177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.604 [2024-11-20 10:48:15.893770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.894274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.894305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.894314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.894481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.894633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.894644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.894649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.894655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.604 [2024-11-20 10:48:15.906387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.907008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.907039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.907048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.907219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.907372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.907380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.907386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.907392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.604 [2024-11-20 10:48:15.918972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.919545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.919577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.919586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.919751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.919903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.919911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.919916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.919922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.604 [2024-11-20 10:48:15.931650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.932128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.932167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.932177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.932342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.932495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.932504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.932509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.932519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.604 [2024-11-20 10:48:15.944251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.944687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.944703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.944709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.944858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.945008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.945015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.945021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.945026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.604 [2024-11-20 10:48:15.956884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.957459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.957490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.957499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.957664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.957816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.957823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.957828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.957834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.604 [2024-11-20 10:48:15.969559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.604 [2024-11-20 10:48:15.969975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.604 [2024-11-20 10:48:15.970007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.604 [2024-11-20 10:48:15.970016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.604 [2024-11-20 10:48:15.970187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.604 [2024-11-20 10:48:15.970340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.604 [2024-11-20 10:48:15.970347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.604 [2024-11-20 10:48:15.970353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.604 [2024-11-20 10:48:15.970359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.866 4620.67 IOPS, 18.05 MiB/s [2024-11-20T09:48:16.242Z] [2024-11-20 10:48:15.982248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:15.982719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:15.982734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:15.982740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:15.982890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:15.983039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:15.983046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:15.983051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:15.983056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.866 [2024-11-20 10:48:15.994936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:15.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:15.995550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:15.995559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:15.995723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:15.995876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:15.995882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:15.995888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:15.995894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
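The interleaved "4620.67 IOPS, 18.05 MiB/s" marker is bdevperf's periodic progress line; it keeps ticking while the reconnect attempts fail because I/O is still being queued against the bdev. A hedged sketch of the kind of invocation that produces this output, with flags mirroring the job parameters printed in the final summary (queue depth 128, 4096-byte I/Os, verify workload, roughly 15 s runtime); the binary path and config file here are assumptions, not taken from this log:

    # run bdevperf against a pre-built bdev config (path assumed)
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json \
        -q 128 -o 4096 -w verify -t 15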
00:30:43.866 [2024-11-20 10:48:16.007614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.007960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.007976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.007982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.008132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.008286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.008293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.008298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:16.008304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.866 [2024-11-20 10:48:16.020292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.020748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.020761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.020767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.020919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.021069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.021075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.021081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:16.021087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.866 [2024-11-20 10:48:16.032929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.033524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.033556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.033565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.033731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.033883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.033890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.033896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:16.033902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.866 [2024-11-20 10:48:16.045635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.046105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.046121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.046127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.046281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.046431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.046438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.046444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:16.046449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.866 [2024-11-20 10:48:16.058296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.058753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.058767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.058772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.058921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.059070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.059084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.059089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:16.059094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.866 [2024-11-20 10:48:16.070939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.071502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.071534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.071543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.071707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.071860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.071867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.071873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.866 [2024-11-20 10:48:16.071879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.866 [2024-11-20 10:48:16.083613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.866 [2024-11-20 10:48:16.084112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.866 [2024-11-20 10:48:16.084127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.866 [2024-11-20 10:48:16.084133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.866 [2024-11-20 10:48:16.084285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.866 [2024-11-20 10:48:16.084435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.866 [2024-11-20 10:48:16.084442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.866 [2024-11-20 10:48:16.084447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.084452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.867 [2024-11-20 10:48:16.096308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.096863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.096896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.096905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.097069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.097227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.097235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.097241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.097250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.867 [2024-11-20 10:48:16.108971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.109595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.109627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.109636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.109801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.109953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.109960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.109966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.109972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.867 [2024-11-20 10:48:16.121554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.122059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.122075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.122081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.122233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.122383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.122391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.122396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.122401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.867 [2024-11-20 10:48:16.134252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.134601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.134615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.134621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.134769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.134919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.134925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.134931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.134936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.867 [2024-11-20 10:48:16.146934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.147313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.147348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.147357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.147524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.147677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.147684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.147689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.147695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.867 [2024-11-20 10:48:16.159609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.160113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.160129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.160135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.160288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.160438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.160445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.160451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.160456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.867 [2024-11-20 10:48:16.172312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.172771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.172785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.172791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.172939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.173089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.173095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.173101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.173105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.867 [2024-11-20 10:48:16.184964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.185447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.185462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.185467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.185620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.185769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.185776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.185781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.185786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.867 [2024-11-20 10:48:16.197636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.198088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.198101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.198106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.198260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.198410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.198416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.867 [2024-11-20 10:48:16.198421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.867 [2024-11-20 10:48:16.198426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.867 [2024-11-20 10:48:16.210302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.867 [2024-11-20 10:48:16.210838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.867 [2024-11-20 10:48:16.210870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.867 [2024-11-20 10:48:16.210878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.867 [2024-11-20 10:48:16.211043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.867 [2024-11-20 10:48:16.211202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.867 [2024-11-20 10:48:16.211209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.868 [2024-11-20 10:48:16.211215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.868 [2024-11-20 10:48:16.211221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.868 [2024-11-20 10:48:16.222944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.868 [2024-11-20 10:48:16.223363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.868 [2024-11-20 10:48:16.223380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.868 [2024-11-20 10:48:16.223386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.868 [2024-11-20 10:48:16.223536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.868 [2024-11-20 10:48:16.223686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.868 [2024-11-20 10:48:16.223696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.868 [2024-11-20 10:48:16.223701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.868 [2024-11-20 10:48:16.223707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.868 [2024-11-20 10:48:16.235570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.868 [2024-11-20 10:48:16.236025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.868 [2024-11-20 10:48:16.236039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:43.868 [2024-11-20 10:48:16.236044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:43.868 [2024-11-20 10:48:16.236197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:43.868 [2024-11-20 10:48:16.236347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.868 [2024-11-20 10:48:16.236353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.868 [2024-11-20 10:48:16.236359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.868 [2024-11-20 10:48:16.236364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.129 [2024-11-20 10:48:16.248225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.129 [2024-11-20 10:48:16.248672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.129 [2024-11-20 10:48:16.248686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.129 [2024-11-20 10:48:16.248692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.129 [2024-11-20 10:48:16.248841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.129 [2024-11-20 10:48:16.248990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.129 [2024-11-20 10:48:16.248997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.129 [2024-11-20 10:48:16.249002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.129 [2024-11-20 10:48:16.249007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:44.129 [2024-11-20 10:48:16.260864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.129 [2024-11-20 10:48:16.261479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.129 [2024-11-20 10:48:16.261511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.129 [2024-11-20 10:48:16.261521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.129 [2024-11-20 10:48:16.261686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.129 [2024-11-20 10:48:16.261839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.129 [2024-11-20 10:48:16.261846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.129 [2024-11-20 10:48:16.261852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.129 [2024-11-20 10:48:16.261863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.129 [2024-11-20 10:48:16.273448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.129 [2024-11-20 10:48:16.273797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.273813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.273819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.273968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.274119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.274125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.274131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.274135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:44.130 [2024-11-20 10:48:16.286140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.286717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.286749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.286757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.286923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.287076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.287083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.287089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.287094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.130 [2024-11-20 10:48:16.298818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.299177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.299196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.299203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.299354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.299504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.299511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.299516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.299522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:44.130 [2024-11-20 10:48:16.311518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.312024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.312060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.312069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.312240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.312392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.312399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.312405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.312411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.130 [2024-11-20 10:48:16.324132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.324599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.324615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.324621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.324771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.324921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.324928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.324933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.324938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:44.130 [2024-11-20 10:48:16.336802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.337120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.337141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.337295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.337445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.337452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.337457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.337462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.130 [2024-11-20 10:48:16.349464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.349821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.349835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.349841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.349991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.350140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.350146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.350152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.350157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
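The "(( i == 0 ))" / "return 0" trace from autotest_common.sh above is the tail of the usual wait-for-RPC loop: the harness polls the freshly started target until its RPC socket answers, then proceeds. A minimal sketch of that pattern, where the socket path and retry budget are assumptions and only the final "(( i == 0 ))" exit check is visible in this log:

    for ((i = 100; i != 0; i--)); do
        # rpc_get_methods is the cheapest RPC that proves the app is listening
        scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
    (( i == 0 )) && { echo "nvmf_tgt never started listening"; exit 1; }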
00:30:44.130 [2024-11-20 10:48:16.362165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.362634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.362647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.362653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.362802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.362951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.362959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.362965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.362970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.130 [2024-11-20 10:48:16.374831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.130 [2024-11-20 10:48:16.375439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.130 [2024-11-20 10:48:16.375472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.130 [2024-11-20 10:48:16.375481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.130 [2024-11-20 10:48:16.375648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.130 [2024-11-20 10:48:16.375802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.130 [2024-11-20 10:48:16.375810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.130 [2024-11-20 10:48:16.375817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.130 [2024-11-20 10:48:16.375823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
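Each failed reset above is followed roughly 12 ms later by the next "resetting controller" notice, i.e. the bdev_nvme layer keeps re-arming its reconnect poller for this controller. On recent SPDK releases that retry policy is tunable before controllers are attached; the option names below come from upstream rpc.py and do not appear anywhere in this log, so treat this as a hedged example rather than what this job actually configured:

    # hedged example: bound the retry loop instead of retrying forever
    scripts/rpc.py bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --ctrlr-loss-timeout-sec 30 \
        --fast-io-failure-timeout-sec 5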
00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.130 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.131 [2024-11-20 10:48:16.381407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.131 [2024-11-20 10:48:16.387415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.131 [2024-11-20 10:48:16.387772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.131 [2024-11-20 10:48:16.387789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.131 [2024-11-20 10:48:16.387795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.131 [2024-11-20 10:48:16.387944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.131 [2024-11-20 10:48:16.388095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.131 [2024-11-20 10:48:16.388102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.131 [2024-11-20 10:48:16.388107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.131 [2024-11-20 10:48:16.388113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
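The rpc_cmd traces here and in the records just below rebuild the target side step by step: a TCP transport ("TCP Transport Init"), a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the listener on 10.0.0.2:4420. Collected in one place (rpc_cmd wraps rpc.py; the script path below is the usual in-tree location, and per upstream rpc.py the -o flag disables the TCP C2H success optimization while -u sets the I/O unit size), the sequence amounts to:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener notice appears, the host's next reset attempt goes through, which is exactly what the later "Resetting controller successful" record shows.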
00:30:44.131 [2024-11-20 10:48:16.400111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.131 [2024-11-20 10:48:16.400688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.131 [2024-11-20 10:48:16.400720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.131 [2024-11-20 10:48:16.400729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.131 [2024-11-20 10:48:16.400894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.131 [2024-11-20 10:48:16.401046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.131 [2024-11-20 10:48:16.401053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.131 [2024-11-20 10:48:16.401059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.131 [2024-11-20 10:48:16.401065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.131 [2024-11-20 10:48:16.412809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.131 [2024-11-20 10:48:16.413169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.131 [2024-11-20 10:48:16.413185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.131 [2024-11-20 10:48:16.413191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.131 [2024-11-20 10:48:16.413341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.131 [2024-11-20 10:48:16.413493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.131 [2024-11-20 10:48:16.413501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.131 [2024-11-20 10:48:16.413511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.131 [2024-11-20 10:48:16.413517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:44.131 Malloc0 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.131 [2024-11-20 10:48:16.425514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.131 [2024-11-20 10:48:16.425979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.131 [2024-11-20 10:48:16.425993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.131 [2024-11-20 10:48:16.425998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.131 [2024-11-20 10:48:16.426147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.131 [2024-11-20 10:48:16.426304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.131 [2024-11-20 10:48:16.426311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.131 [2024-11-20 10:48:16.426316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:44.131 [2024-11-20 10:48:16.426321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.131 [2024-11-20 10:48:16.438184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.131 [2024-11-20 10:48:16.438781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.131 [2024-11-20 10:48:16.438813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c60000 with addr=10.0.0.2, port=4420 00:30:44.131 [2024-11-20 10:48:16.438822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c60000 is same with the state(6) to be set 00:30:44.131 [2024-11-20 10:48:16.438987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c60000 (9): Bad file descriptor 00:30:44.131 [2024-11-20 10:48:16.439140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:44.131 [2024-11-20 10:48:16.439147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:44.131 [2024-11-20 10:48:16.439152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:44.131 [2024-11-20 10:48:16.439166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:44.131 [2024-11-20 10:48:16.447871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.131 [2024-11-20 10:48:16.450880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.131 10:48:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2236452 00:30:44.131 [2024-11-20 10:48:16.476539] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:45.818 4911.14 IOPS, 19.18 MiB/s [2024-11-20T09:48:19.143Z] 5955.62 IOPS, 23.26 MiB/s [2024-11-20T09:48:20.083Z] 6770.22 IOPS, 26.45 MiB/s [2024-11-20T09:48:21.021Z] 7430.60 IOPS, 29.03 MiB/s [2024-11-20T09:48:22.402Z] 7958.73 IOPS, 31.09 MiB/s [2024-11-20T09:48:23.342Z] 8406.08 IOPS, 32.84 MiB/s [2024-11-20T09:48:24.285Z] 8774.00 IOPS, 34.27 MiB/s [2024-11-20T09:48:25.225Z] 9081.57 IOPS, 35.47 MiB/s 00:30:52.849 Latency(us) 00:30:52.849 [2024-11-20T09:48:25.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:52.849 Verification LBA range: start 0x0 length 0x4000 00:30:52.849 Nvme1n1 : 15.01 9364.60 36.58 13504.16 0.00 5577.18 378.88 17257.81 00:30:52.849 [2024-11-20T09:48:25.225Z] =================================================================================================================== 00:30:52.849 [2024-11-20T09:48:25.225Z] Total : 9364.60 36.58 13504.16 0.00 5577.18 378.88 17257.81 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.850 rmmod nvme_tcp 00:30:52.850 rmmod nvme_fabrics 00:30:52.850 rmmod nvme_keyring 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2238032 ']' 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2238032 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2238032 ']' 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2238032 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.850 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2238032 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2238032' 00:30:53.111 killing process with pid 2238032 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2238032 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2238032 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.111 10:48:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.656 10:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.656 00:30:55.656 real 0m28.477s 00:30:55.656 user 1m3.682s 00:30:55.656 sys 0m7.905s 00:30:55.656 10:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.656 10:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # 
set +x 00:30:55.656 ************************************ 00:30:55.656 END TEST nvmf_bdevperf 00:30:55.656 ************************************ 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.657 ************************************ 00:30:55.657 START TEST nvmf_target_disconnect 00:30:55.657 ************************************ 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:55.657 * Looking for test storage... 00:30:55.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.657 --rc genhtml_branch_coverage=1 00:30:55.657 --rc genhtml_function_coverage=1 00:30:55.657 --rc genhtml_legend=1 00:30:55.657 --rc geninfo_all_blocks=1 00:30:55.657 --rc geninfo_unexecuted_blocks=1 00:30:55.657 00:30:55.657 ' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.657 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:55.658 10:48:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:03.800 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:03.800 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:03.800 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:03.800 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:03.801 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
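The nvmf_tcp_init sequence that follows builds a point-to-point topology out of the two E810 ports found above: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace below (same commands, xtrace prefixes stripped):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
    ping -c 1 10.0.0.2                                             # reachability check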
00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:03.801 10:48:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:03.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:03.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:31:03.801 00:31:03.801 --- 10.0.0.2 ping statistics --- 00:31:03.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.801 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:03.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:03.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:31:03.801 00:31:03.801 --- 10.0.0.1 ping statistics --- 00:31:03.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.801 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:03.801 ************************************ 00:31:03.801 START TEST nvmf_target_disconnect_tc1 00:31:03.801 ************************************ 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:03.801 10:48:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.801 [2024-11-20 10:48:35.499076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.801 [2024-11-20 10:48:35.499148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0dad0 with addr=10.0.0.2, port=4420 00:31:03.801 [2024-11-20 10:48:35.499181] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:03.801 [2024-11-20 10:48:35.499200] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:03.801 [2024-11-20 10:48:35.499209] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:03.801 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:03.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:03.801 Initializing NVMe Controllers 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:03.801 00:31:03.801 real 0m0.148s 00:31:03.801 user 0m0.063s 00:31:03.801 sys 0m0.083s 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:03.801 ************************************ 00:31:03.801 END TEST nvmf_target_disconnect_tc1 00:31:03.801 ************************************ 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
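tc1 above deliberately probes the target before any listener exists: the reconnect example fails with connect() errno = 111, spdk_nvme_probe() reports the failure, and the NOT wrapper turns the non-zero exit (es=1) into a pass. Stripped of the valid_exec_arg plumbing, the check amounts to the sketch below (workspace-relative path assumed):

    if ! ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo 'expected probe failure (ECONNREFUSED) - tc1 passes'
    fi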
00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:03.801 ************************************ 00:31:03.801 START TEST nvmf_target_disconnect_tc2 00:31:03.801 ************************************ 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:03.801 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2244270 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2244270 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2244270 ']' 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.802 10:48:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:03.802 [2024-11-20 10:48:35.666635] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:31:03.802 [2024-11-20 10:48:35.666699] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.802 [2024-11-20 10:48:35.768131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:03.802 [2024-11-20 10:48:35.819945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.802 [2024-11-20 10:48:35.819999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:03.802 [2024-11-20 10:48:35.820008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.802 [2024-11-20 10:48:35.820015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.802 [2024-11-20 10:48:35.820021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.802 [2024-11-20 10:48:35.822229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:03.802 [2024-11-20 10:48:35.822389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:03.802 [2024-11-20 10:48:35.822549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:03.802 [2024-11-20 10:48:35.822550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 Malloc0 00:31:04.375 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 [2024-11-20 10:48:36.575727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 10:48:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 [2024-11-20 10:48:36.616135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2244346 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:04.376 10:48:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.294 10:48:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2244270 00:31:06.294 10:48:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error 
(sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 [2024-11-20 10:48:38.654863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed 
with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Read completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.294 starting I/O failed 00:31:06.294 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 [2024-11-20 10:48:38.655237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 
00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Read completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 Write completed with error (sct=0, sc=8) 00:31:06.295 starting I/O failed 00:31:06.295 [2024-11-20 10:48:38.655492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:06.295 [2024-11-20 10:48:38.655881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.295 [2024-11-20 10:48:38.655909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.295 qpair failed and we were unable to recover it. 00:31:06.295 [2024-11-20 10:48:38.656272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.295 [2024-11-20 10:48:38.656287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.295 qpair failed and we were unable to recover it. 00:31:06.295 [2024-11-20 10:48:38.656583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.295 [2024-11-20 10:48:38.656594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.295 qpair failed and we were unable to recover it. 00:31:06.295 [2024-11-20 10:48:38.656892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.295 [2024-11-20 10:48:38.656903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.295 qpair failed and we were unable to recover it. 00:31:06.295 [2024-11-20 10:48:38.657215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.295 [2024-11-20 10:48:38.657226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.295 qpair failed and we were unable to recover it. 00:31:06.295 [2024-11-20 10:48:38.657535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.295 [2024-11-20 10:48:38.657545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.295 qpair failed and we were unable to recover it. 
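All of the in-flight I/O above completes with (sct=0, sc=8) once the target is killed: status code type 0 is the NVMe generic command status, and 0x08 there is Command Aborted due to SQ Deletion, meaning the submission queues vanished under the workload. The accompanying CQ transport error -6 matches the 'No such device or address' (ENXIO) text in the log. A plain lookup, not test output (assumes python3; the status name is from the NVMe spec):

    python3 - <<'EOF'
    import os
    # NVMe Generic Command Status (SCT 0) codes relevant here
    generic_sc = {0x00: 'Successful Completion',
                  0x08: 'Command Aborted due to SQ Deletion'}
    print('sct=0, sc=8 ->', generic_sc[0x08])
    print('transport error -6 ->', os.strerror(6))  # No such device or address
    EOF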
00:31:06.295 [2024-11-20 10:48:38.655881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.295 [2024-11-20 10:48:38.655909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:06.295 qpair failed and we were unable to recover it.
[... the same three-line connect() failure (posix_sock_create errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error -> "qpair failed and we were unable to recover it") repeats for every reconnect attempt from 10:48:38.656 through 10:48:38.720; all attempts target 10.0.0.2:4420 via tqpair=0x7f2388000b90 ...]
00:31:06.574 [2024-11-20 10:48:38.720376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.574 [2024-11-20 10:48:38.720398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:06.574 qpair failed and we were unable to recover it.
00:31:06.574 [2024-11-20 10:48:38.720759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.720781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.721137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.721169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.721480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.721500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.721833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.721862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.722223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.722254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.722623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.722651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.723019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.723047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.723294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.723328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.723676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.723705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.724056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.724087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 
00:31:06.574 [2024-11-20 10:48:38.724431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.724461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.724814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.724842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.725201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.725230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.725606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.725636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.725902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.725929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.726276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.726306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.726679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.726707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.574 [2024-11-20 10:48:38.727054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.574 [2024-11-20 10:48:38.727081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.574 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.727446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.727477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.727811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.727841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 
00:31:06.575 [2024-11-20 10:48:38.728205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.728237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.728602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.728630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.728985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.729014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.729380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.729410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.729806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.729836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.730188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.730218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.730581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.730609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.730971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.730999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.731337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.731367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.731726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.731754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 
00:31:06.575 [2024-11-20 10:48:38.732124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.732153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.732595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.732625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.732979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.733007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.733365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.733407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.733737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.733766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.734132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.734170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.734568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.734597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.734937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.734966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.736801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.736868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.737287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.737325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 
00:31:06.575 [2024-11-20 10:48:38.737683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.737712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.738068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.738097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.738471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.738501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.738864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.738892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.739271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.739302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.739646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.739676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.740044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.740072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.575 [2024-11-20 10:48:38.740434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.575 [2024-11-20 10:48:38.740464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.575 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.740832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.740862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.741199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.741229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 
00:31:06.576 [2024-11-20 10:48:38.741586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.741616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.741973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.742003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.742371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.742401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.742740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.742769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.743139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.743182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.743535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.743564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.743927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.743955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.744224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.744256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.744624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.744654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.745014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.745042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 
00:31:06.576 [2024-11-20 10:48:38.745426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.745457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.745822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.745851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.746055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.746088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.746467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.746498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.746753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.746782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.747028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.747056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.747425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.747456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.747820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.747849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.748224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.748256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.748556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.748586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 
00:31:06.576 [2024-11-20 10:48:38.748924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.748954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.749306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.749336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.749685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.749716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.750048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.750083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.750427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.750459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.750816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.750845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.750998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.576 [2024-11-20 10:48:38.751029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.576 qpair failed and we were unable to recover it. 00:31:06.576 [2024-11-20 10:48:38.751415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.751447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.751783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.751813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.752146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.752192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 
00:31:06.577 [2024-11-20 10:48:38.752426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.752458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.752829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.752857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.753176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.753207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.753539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.753568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.753928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.753956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.754330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.754360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.754704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.754733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.755100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.755129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.755487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.755517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.755877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.755906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 
00:31:06.577 [2024-11-20 10:48:38.756269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.756300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.756667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.756695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.757054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.757082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.757449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.757481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.757861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.757890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.758246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.758277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.758635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.758663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.758911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.758939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.759362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.759391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.759740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.759770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 
00:31:06.577 [2024-11-20 10:48:38.760133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.760173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.760563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.760592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.760931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.760959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.761298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.761329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.761687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.761716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.762025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.762054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.762407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.762439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.762786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.577 [2024-11-20 10:48:38.762815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.577 qpair failed and we were unable to recover it. 00:31:06.577 [2024-11-20 10:48:38.763183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.763213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.763457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.763488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 
00:31:06.578 [2024-11-20 10:48:38.763759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.763786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.764135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.764175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.764523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.764553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.764914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.764949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.765317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.765347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.765679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.765707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.766063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.766093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.766387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.766418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.766726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.766755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.767112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.767140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 
00:31:06.578 [2024-11-20 10:48:38.767577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.767607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.767931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.767959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.768326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.768357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.768722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.768752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.769112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.769141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.769501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.769532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.769895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.769924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.770283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.770315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.770673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.770702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 00:31:06.578 [2024-11-20 10:48:38.771098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.578 [2024-11-20 10:48:38.771126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.578 qpair failed and we were unable to recover it. 
00:31:06.578 [2024-11-20 10:48:38.771549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.578 [2024-11-20 10:48:38.771580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:06.578 qpair failed and we were unable to recover it.
[... the same three-line failure record (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connection retry from 10:48:38.771 through 10:48:38.861 ...]
00:31:06.585 [2024-11-20 10:48:38.861507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.585 [2024-11-20 10:48:38.861539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:06.585 qpair failed and we were unable to recover it.
00:31:06.585 [2024-11-20 10:48:38.861830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.585 [2024-11-20 10:48:38.861865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.585 qpair failed and we were unable to recover it. 00:31:06.585 [2024-11-20 10:48:38.862250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.585 [2024-11-20 10:48:38.862282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.585 qpair failed and we were unable to recover it. 00:31:06.585 [2024-11-20 10:48:38.862553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.585 [2024-11-20 10:48:38.862582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.585 qpair failed and we were unable to recover it. 00:31:06.585 [2024-11-20 10:48:38.862986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.585 [2024-11-20 10:48:38.863015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.585 qpair failed and we were unable to recover it. 00:31:06.585 [2024-11-20 10:48:38.863422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.585 [2024-11-20 10:48:38.863452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.863812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.863842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.864204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.864235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.864632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.864661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.865020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.865050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.865426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.865457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 
00:31:06.586 [2024-11-20 10:48:38.865831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.865860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.866226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.866257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.866630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.866660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.867017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.867047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.867414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.867445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.867807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.867836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.868199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.868230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.868616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.868645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.868999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.869028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.869287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.869317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 
00:31:06.586 [2024-11-20 10:48:38.869754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.869783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.870140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.870194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.870545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.870574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.870942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.870972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.871350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.871381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.871725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.871755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.872103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.872132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.872529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.872561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.872918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.586 [2024-11-20 10:48:38.872949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.586 qpair failed and we were unable to recover it. 00:31:06.586 [2024-11-20 10:48:38.873229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.873261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 
00:31:06.587 [2024-11-20 10:48:38.873630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.873660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.874028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.874058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.874397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.874428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.874776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.874805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.875180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.875213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.875564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.875594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.875950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.875979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.876339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.876369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.876616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.876649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.876997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.877027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 
00:31:06.587 [2024-11-20 10:48:38.877396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.877437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.877791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.877821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.878204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.878236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.878603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.878633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.878989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.879020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.879384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.879415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.879777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.879807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.880177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.880209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.880563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.880593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.880841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.880870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 
00:31:06.587 [2024-11-20 10:48:38.881222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.881254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.881636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.881667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.882027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.882057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.882424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.882456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.882813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.882843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.883207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.883239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.883598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.883628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.883978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.884009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.587 qpair failed and we were unable to recover it. 00:31:06.587 [2024-11-20 10:48:38.884401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.587 [2024-11-20 10:48:38.884433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.884773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.884803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 
00:31:06.588 [2024-11-20 10:48:38.885184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.885215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.885582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.885611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.885968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.885998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.886337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.886368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.886737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.886766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.887120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.887148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.887515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.887544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.887902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.887932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.888282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.888313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.888688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.888717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 
00:31:06.588 [2024-11-20 10:48:38.889075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.889104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.889443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.889472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.889838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.889866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.890236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.890267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.890629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.890660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.891034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.891062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.891394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.891425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.891786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.891815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.892179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.892210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.892570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.892598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 
00:31:06.588 [2024-11-20 10:48:38.892963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.892999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.893341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.893372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.893732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.893761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.894126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.894154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.894547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.894576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.894954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.894984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.895336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.895368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.895708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.895737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.896141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.896182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.896541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.896571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 
00:31:06.588 [2024-11-20 10:48:38.896939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.896969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.897312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.588 [2024-11-20 10:48:38.897343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.588 qpair failed and we were unable to recover it. 00:31:06.588 [2024-11-20 10:48:38.897687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.897715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.897975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.898003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.898377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.898408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.898788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.898817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.899246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.899277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.899642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.899672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.900033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.900062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.900414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.900443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 
00:31:06.589 [2024-11-20 10:48:38.900802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.900831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.901193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.901223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.901611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.901640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.902006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.902035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.902401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.902430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.902791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.902820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.903233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.903263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.903502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.903531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.903883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.903913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.904270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.904302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 
00:31:06.589 [2024-11-20 10:48:38.904653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.904681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.904938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.904970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.905381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.905412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.905761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.905790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.906170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.906203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.906575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.906604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.906980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.907008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.907357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.907388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.907827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.907855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.908221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.908252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 
00:31:06.589 [2024-11-20 10:48:38.908618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.908660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.908988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.909018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.589 [2024-11-20 10:48:38.909485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.589 [2024-11-20 10:48:38.909516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.589 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.909875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.909904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.910278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.910308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.910669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.910698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.911047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.911077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.911446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.911476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.911887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.911916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 00:31:06.590 [2024-11-20 10:48:38.912276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.912305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it. 
00:31:06.590 [2024-11-20 10:48:38.912689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.590 [2024-11-20 10:48:38.912718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.590 qpair failed and we were unable to recover it.
[... identical connect()/qpair error repeated continuously, with only timestamps advancing, from 2024-11-20 10:48:38.912689 through 10:48:38.992254 ...]
00:31:06.869 [2024-11-20 10:48:38.992221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.992254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it.
00:31:06.869 [2024-11-20 10:48:38.992629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.992659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.992964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.992993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.993370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.993399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.993771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.993801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.994155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.994209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.994615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.994644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.994988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.995017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.869 [2024-11-20 10:48:38.995334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.869 [2024-11-20 10:48:38.995365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.869 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.995740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.995770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.996004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.996032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 
00:31:06.870 [2024-11-20 10:48:38.996464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.996494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.996759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.996788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.997176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.997207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.997556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.997585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.997975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.998003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.998398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.998430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.998704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.998733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.999046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.999074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.999446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.999478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:38.999824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:38.999853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 
00:31:06.870 [2024-11-20 10:48:39.000220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.000258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.000684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.000716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.000954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.000988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.001229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.001261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.001702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.001732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.002019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.002047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.002406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.002436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.002831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.002860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.003222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.003253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.003634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.003664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 
00:31:06.870 [2024-11-20 10:48:39.004024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.004054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.004302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.004331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.004706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.004735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.005092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.005122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.005490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.005522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.005887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.005916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.006301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.006330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.006594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.006622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.007014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.007043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.007401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.007439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 
00:31:06.870 [2024-11-20 10:48:39.007779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.007808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.008181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.008212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.870 [2024-11-20 10:48:39.008567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.870 [2024-11-20 10:48:39.008597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.870 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.008967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.008996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.009341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.009371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.009732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.009761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.010118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.010147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.010543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.010574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.010837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.010866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.011216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.011246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 
00:31:06.871 [2024-11-20 10:48:39.011635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.011664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.012031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.012061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.012415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.012446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.012683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.012711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.013067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.013095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.013456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.013486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.013836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.013866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.014199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.014229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.014603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.014632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.014993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.015023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 
00:31:06.871 [2024-11-20 10:48:39.015394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.015432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.015768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.015797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.016217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.016248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.016600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.016630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.016989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.017017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.017274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.017305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.017708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.017737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.017991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.018019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.018395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.018425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.018786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.018815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 
00:31:06.871 [2024-11-20 10:48:39.019076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.019105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.019506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.019538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.019898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.019927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.020290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.020321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.020669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.020701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.021085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.021115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.021493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.021523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.021905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.021934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.022291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.022320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 00:31:06.871 [2024-11-20 10:48:39.022689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.871 [2024-11-20 10:48:39.022718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.871 qpair failed and we were unable to recover it. 
00:31:06.871 [2024-11-20 10:48:39.023089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.023120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.023579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.023611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.023969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.023998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.024340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.024370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.024732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.024761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.025091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.025122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.025502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.025533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.025792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.025824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.026194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.026225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.026574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.026604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 
00:31:06.872 [2024-11-20 10:48:39.026954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.026983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.027350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.027382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.027752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.027780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.028146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.028189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.028544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.028572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.028817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.028846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.029217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.029248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.029689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.029720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.030156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.030198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.030436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.030465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 
00:31:06.872 [2024-11-20 10:48:39.030837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.030872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.031217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.031247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.031598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.031627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.031876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.031911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.032247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.032278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.032656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.032684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.033049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.033077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.033426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.033456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.033870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.033900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.034241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.034271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 
00:31:06.872 [2024-11-20 10:48:39.034661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.034690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.034950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.034978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.035228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.035258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.035590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.035619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.035888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.035917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.036263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.036292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.036652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.036682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.872 [2024-11-20 10:48:39.037044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.872 [2024-11-20 10:48:39.037074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.872 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.037414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.037444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.037810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.037839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 
00:31:06.873 [2024-11-20 10:48:39.038199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.038228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.038602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.038632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.038971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.039000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.039360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.039391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.039736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.039765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.040125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.040152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.040522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.040550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.040984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.041016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.041388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.041418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 00:31:06.873 [2024-11-20 10:48:39.041795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.873 [2024-11-20 10:48:39.041824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.873 qpair failed and we were unable to recover it. 
00:31:06.873 [2024-11-20 10:48:39.042181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.873 [2024-11-20 10:48:39.042211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:06.873 qpair failed and we were unable to recover it.
00:31:06.879 [the same three-line error group repeats for every reconnect attempt from 10:48:39.042 through 10:48:39.122: connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the socket error for tqpair=0x7f2388000b90, and each qpair fails without recovery]
00:31:06.879 [2024-11-20 10:48:39.123171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.123202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.123611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.123641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.124003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.124031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.124323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.124353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.124720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.124751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.125117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.125147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.125567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.125597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.125838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.125870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.126236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.126266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.126630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.126659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 
00:31:06.879 [2024-11-20 10:48:39.127025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.127053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.127429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.127459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.127824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.127854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.128209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.128237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.128599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.128628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.128998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.129027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.129407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.129448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.129814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.129844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.130206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.130236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.130640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.130669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 
00:31:06.879 [2024-11-20 10:48:39.131029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.131057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.131317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.131346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.131716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.131745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.132109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.132139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.133980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.134042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.134515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.134551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.134913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.134942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.135309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.135347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.135699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.135729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 00:31:06.879 [2024-11-20 10:48:39.136088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.879 [2024-11-20 10:48:39.136117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.879 qpair failed and we were unable to recover it. 
00:31:06.879 [2024-11-20 10:48:39.136480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.136511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.136934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.136964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.137308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.137339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.137529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.137557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.137922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.137950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.138320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.138351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.138689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.138719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.139087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.139118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.139482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.139512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.139863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.139891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 
00:31:06.880 [2024-11-20 10:48:39.140139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.140183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.140621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.140651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.141018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.141049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.141395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.141425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.141800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.141829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.142066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.142094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.142452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.142483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.142858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.142888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.143257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.143289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.143650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.143681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 
00:31:06.880 [2024-11-20 10:48:39.144013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.144042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.144287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.144321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.144659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.144690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.145057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.145086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.145402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.145432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.145767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.145798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.146175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.146207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.146555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.146585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.146950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.146981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.147364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.147395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 
00:31:06.880 [2024-11-20 10:48:39.147749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.147779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.148135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.148177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.148541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.148571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.148910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.148939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.149282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.149313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.149648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.149677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.150034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.150064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.880 [2024-11-20 10:48:39.150429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.880 [2024-11-20 10:48:39.150470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.880 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.150805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.150835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.151195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.151227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 
00:31:06.881 [2024-11-20 10:48:39.151601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.151631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.151887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.151916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.152269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.152299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.152740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.152770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.153111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.153141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.153498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.153527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.153936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.153965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.154311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.154342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.154711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.154740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.155102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.155131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 
00:31:06.881 [2024-11-20 10:48:39.155500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.155530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.155898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.155928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.156275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.156305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.156657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.156686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.157028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.157060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.157440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.157471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.157837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.157866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.158248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.158278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.158646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.158675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.158972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.159001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 
00:31:06.881 [2024-11-20 10:48:39.159386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.159416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.159694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.159723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.160075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.160104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.160448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.160479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.160820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.160850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.161129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.161175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.161552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.161582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.161947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.161977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.162364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.162397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.162740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.162771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 
00:31:06.881 [2024-11-20 10:48:39.163143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.163204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.163582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.163612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.163942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.163972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.164349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.164380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.164819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.164849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.881 [2024-11-20 10:48:39.165093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.881 [2024-11-20 10:48:39.165121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.881 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.165549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.165580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.165923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.165959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.166330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.166361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.166624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.166654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 
00:31:06.882 [2024-11-20 10:48:39.167003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.167032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.167423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.167455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.167790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.167820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.168044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.168074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.168462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.168494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.168838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.168867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.169211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.169242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.169620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.169649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.170009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.170039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.170404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.170437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 
00:31:06.882 [2024-11-20 10:48:39.170692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.170724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.171199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.171231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.171656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.171685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.171941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.171970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.172313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.172345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.172698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.172729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.173103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.173133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.173491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.173521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.173881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.173909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 00:31:06.882 [2024-11-20 10:48:39.174287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.174318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 
00:31:06.882 [2024-11-20 10:48:39.174687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.882 [2024-11-20 10:48:39.174717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:06.882 qpair failed and we were unable to recover it. 
00:31:07.161 (the three-line error above repeats verbatim, timestamps aside, from [2024-11-20 10:48:39.174979] through [2024-11-20 10:48:39.255415]: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x7f2388000b90 fails with errno = 111 (ECONNREFUSED) and the qpair is never recovered)
00:31:07.161 [2024-11-20 10:48:39.255751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.161 [2024-11-20 10:48:39.255780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.161 qpair failed and we were unable to recover it. 00:31:07.161 [2024-11-20 10:48:39.256170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.161 [2024-11-20 10:48:39.256201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.161 qpair failed and we were unable to recover it. 00:31:07.161 [2024-11-20 10:48:39.256549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.161 [2024-11-20 10:48:39.256578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.161 qpair failed and we were unable to recover it. 00:31:07.161 [2024-11-20 10:48:39.256949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.161 [2024-11-20 10:48:39.256977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.257320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.257351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.257718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.257746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.258110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.258141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.258483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.258512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.258877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.258906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.259264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.259296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 
00:31:07.162 [2024-11-20 10:48:39.259663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.259691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.260046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.260075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.260438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.260468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.260735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.260764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.261182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.261211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.261470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.261498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.261932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.261961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.262328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.262357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.262717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.262746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.263111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.263140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 
00:31:07.162 [2024-11-20 10:48:39.263509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.263540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.263898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.263927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.264299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.264330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.264693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.264721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.264964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.264995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.265332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.265362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.265725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.265753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.266102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.266130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.266482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.266511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.266862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.266893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 
00:31:07.162 [2024-11-20 10:48:39.267239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.267269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.267624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.267651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.268026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.268054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.268327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.268357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.268731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.268765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.269099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.269130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.269495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.269525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.269881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.269910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.270303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.270333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.270694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.270722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 
00:31:07.162 [2024-11-20 10:48:39.271143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.271184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.162 qpair failed and we were unable to recover it. 00:31:07.162 [2024-11-20 10:48:39.271543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.162 [2024-11-20 10:48:39.271573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.271943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.271972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.272339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.272370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.272751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.272779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.273215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.273245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.273666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.273695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.274096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.274124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.274476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.274508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.274879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.274908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 
00:31:07.163 [2024-11-20 10:48:39.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.275303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.275578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.275952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.275983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.276344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.276375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.276641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.276669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.277028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.277059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.277396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.277426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.277793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.277823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.278135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.278176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.278349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.278379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 
00:31:07.163 [2024-11-20 10:48:39.278754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.278782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.279154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.279195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.279547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.279575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.279948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.279977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.280340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.280372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.280728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.280756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.281176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.281206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.281469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.281501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.281746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.281778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.282170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.282200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 
00:31:07.163 [2024-11-20 10:48:39.282546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.282575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.282934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.282962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.283305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.283336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.283718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.283747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.284110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.284145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.284437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.284467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.284801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.284830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.285205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.285235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.285675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.163 [2024-11-20 10:48:39.285704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.163 qpair failed and we were unable to recover it. 00:31:07.163 [2024-11-20 10:48:39.285937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.285971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 
00:31:07.164 [2024-11-20 10:48:39.286338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.286367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.286805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.286835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.287184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.287214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.287595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.287624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.287980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.288009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.288428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.288459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.288815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.288844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.289094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.289122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.289484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.289516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.289855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.289884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 
00:31:07.164 [2024-11-20 10:48:39.290244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.290275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.290667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.290695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.291063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.291091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.291367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.291396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.291738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.291768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.292121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.292150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.292516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.292544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.292920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.292950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.293312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.293342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.293608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.293636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 
00:31:07.164 [2024-11-20 10:48:39.293984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.294014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.294363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.294394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.294758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.294786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.295096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.295134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.295516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.295545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.295950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.295981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.296213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.296243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.296566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.296596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.296950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.296979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.297351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.297381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 
00:31:07.164 [2024-11-20 10:48:39.297746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.164 [2024-11-20 10:48:39.297773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.164 qpair failed and we were unable to recover it. 00:31:07.164 [2024-11-20 10:48:39.298140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.298182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.298543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.298573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.298945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.298973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.299336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.299373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.299768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.299797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.300170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.300199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.300596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.300624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.300862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.300894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.301258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.301290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 
00:31:07.165 [2024-11-20 10:48:39.301659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.301688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.302104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.302132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.302497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.302527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.302887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.302918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.304776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.304837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.305275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.305311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.305679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.305709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.306076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.306104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.306387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.306417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 00:31:07.165 [2024-11-20 10:48:39.306769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.165 [2024-11-20 10:48:39.306798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.165 qpair failed and we were unable to recover it. 
00:31:07.165 [2024-11-20 10:48:39.307208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.165 [2024-11-20 10:48:39.307238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.165 qpair failed and we were unable to recover it.
[... the same pair of errors — connect() failed, errno = 111 (connection refused) from posix.c:1054:posix_sock_create, followed by the nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 — repeats for every subsequent reconnect attempt, with only the timestamps advancing; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:07.171 [2024-11-20 10:48:39.388198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.171 [2024-11-20 10:48:39.388228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.171 qpair failed and we were unable to recover it.
00:31:07.171 [2024-11-20 10:48:39.388603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.388631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.388992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.389022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.389393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.389422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.389784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.389813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.390176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.390205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.390571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.390599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.390963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.390991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.391363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.391392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.391824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.391852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.392208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.392238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 
00:31:07.171 [2024-11-20 10:48:39.392459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.392490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.392849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.392878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.393254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.393285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.393660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.393690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.394054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.394089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.394448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.394478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.394829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.394857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.395218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.395249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.395630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.395659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.396024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.396052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 
00:31:07.171 [2024-11-20 10:48:39.396414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.396454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.396796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.396826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.397184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.397215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.397612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.397640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.398007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.398037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.398393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.398423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.398773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.398802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.399172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.399202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.399571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.399600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.171 [2024-11-20 10:48:39.399913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.399941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 
00:31:07.171 [2024-11-20 10:48:39.400327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.171 [2024-11-20 10:48:39.400358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.171 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.400717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.400745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.401111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.401139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.401495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.401524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.401882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.401910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.402177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.402206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.402468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.402501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.402849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.402878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.403243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.403273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.403527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.403556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 
00:31:07.172 [2024-11-20 10:48:39.403906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.403935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.404318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.404349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.404713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.404742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.405108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.405137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.405480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.405509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.405905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.405934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.406304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.406334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.406684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.406713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.406872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.406904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.407258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.407289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 
00:31:07.172 [2024-11-20 10:48:39.407663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.407691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.408044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.408073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.408427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.408457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.408827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.408856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.409239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.409276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.409696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.409725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.410118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.410147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.410496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.410527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.410940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.410968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.411257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.411287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 
00:31:07.172 [2024-11-20 10:48:39.411663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.411692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.412062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.412090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.412346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.412376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.412728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.412757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.413117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.413145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.413547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.413576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.413962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.413990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.414342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.414373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.172 qpair failed and we were unable to recover it. 00:31:07.172 [2024-11-20 10:48:39.414767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.172 [2024-11-20 10:48:39.414796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.415173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.415203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 
00:31:07.173 [2024-11-20 10:48:39.415468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.415497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.415779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.415808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.416183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.416213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.416599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.416628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.416991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.417020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.417255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.417285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.417544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.417572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.417940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.417969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.418267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.418297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.418688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.418717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 
00:31:07.173 [2024-11-20 10:48:39.419080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.419109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.419470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.419501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.419794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.419822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.420192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.420222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.420554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.420584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.420848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.420877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.421178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.421208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.421557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.421586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.421963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.421991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.422195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.422228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 
00:31:07.173 [2024-11-20 10:48:39.422607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.422636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.422905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.422934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.423299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.423329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.423695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.423726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.424082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.424117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.424482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.424513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.424879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.424908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.425272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.425302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.425658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.425686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.426053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.426082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 
00:31:07.173 [2024-11-20 10:48:39.426426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.426458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.426814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.426843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.173 qpair failed and we were unable to recover it. 00:31:07.173 [2024-11-20 10:48:39.427102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.173 [2024-11-20 10:48:39.427130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.427425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.427453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.427803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.427831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.428082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.428112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.428469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.428501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.428858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.428889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.429265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.429299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.429655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.429686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 
00:31:07.174 [2024-11-20 10:48:39.429933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.429964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.430344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.430377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.430739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.430769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.431133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.431174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.431508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.431542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.431900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.431930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.432291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.432324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.432759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.432790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.433151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.433195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.433572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.433602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 
00:31:07.174 [2024-11-20 10:48:39.433892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.433922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.434285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.434320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.434724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.434754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.435086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.435118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.435483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.435515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.435873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.435905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.436266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.436298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.436651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.436683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.437045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.437075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 00:31:07.174 [2024-11-20 10:48:39.437449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.437482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it. 
00:31:07.174 [2024-11-20 10:48:39.437841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.174 [2024-11-20 10:48:39.437871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.174 qpair failed and we were unable to recover it.
00:31:07.174 [... the same three-line failure sequence repeats, essentially verbatim, for roughly 200 further connection attempts between 10:48:39.438 and 10:48:39.518; only the microsecond timestamps advance, while the error (connect() failed, errno = 111), the qpair (0x7f2388000b90), and the target (10.0.0.2, port 4420) are identical throughout ...]
00:31:07.180 [2024-11-20 10:48:39.518674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.180 [2024-11-20 10:48:39.518704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.180 qpair failed and we were unable to recover it.
00:31:07.180 [2024-11-20 10:48:39.519064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.180 [2024-11-20 10:48:39.519097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.180 qpair failed and we were unable to recover it. 00:31:07.180 [2024-11-20 10:48:39.519445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.180 [2024-11-20 10:48:39.519478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.180 qpair failed and we were unable to recover it. 00:31:07.180 [2024-11-20 10:48:39.519804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.180 [2024-11-20 10:48:39.519834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.180 qpair failed and we were unable to recover it. 00:31:07.180 [2024-11-20 10:48:39.520187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.180 [2024-11-20 10:48:39.520221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.180 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.520611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.520644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.520995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.521027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.521388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.521421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.521786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.521817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.522153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.522200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.522396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.522427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 
00:31:07.453 [2024-11-20 10:48:39.522843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.522875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.523227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.523260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.523623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.523655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.524005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.524035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.524409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.524440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.524800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.524832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.525188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.525221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.525613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.525644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.526001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.526033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.526406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.526438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 
00:31:07.453 [2024-11-20 10:48:39.526788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.526818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.527181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.527221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.527576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.527607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.527960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.527990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.528340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.528371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.528746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.528777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.529142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.529185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.529536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.529569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.530001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.530031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.453 [2024-11-20 10:48:39.530270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.530304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 
00:31:07.453 [2024-11-20 10:48:39.530541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.453 [2024-11-20 10:48:39.530572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.453 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.530925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.530956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.531320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.531352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.531707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.531739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.532103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.532134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.532525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.532557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.532911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.532943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.533298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.533330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.533682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.533713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.534116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.534147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 
00:31:07.454 [2024-11-20 10:48:39.534503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.534534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.534889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.534921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.535284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.535316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.535657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.535688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.536045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.536076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.536324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.536358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.536712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.536743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.537101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.537132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.537522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.537554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.537930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.537963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 
00:31:07.454 [2024-11-20 10:48:39.538330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.538363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.538736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.538766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.539127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.539181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.539550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.539580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.539945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.539976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.540339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.540373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.540732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.540763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.541113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.541145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.541504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.541536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.541886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.541916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 
00:31:07.454 [2024-11-20 10:48:39.542279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.542315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.542694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.542730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.542974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.543003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.543373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.543405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.543760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.543791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.544144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.544186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.544533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.544565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.544933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.454 [2024-11-20 10:48:39.544964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.454 qpair failed and we were unable to recover it. 00:31:07.454 [2024-11-20 10:48:39.545322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.545354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.545709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.545740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 
00:31:07.455 [2024-11-20 10:48:39.546130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.546173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.546549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.546579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.546932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.546964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.547194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.547227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.547581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.547612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.547971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.548004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.548387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.548418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.548779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.548811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.549216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.549249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.549603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.549634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 
00:31:07.455 [2024-11-20 10:48:39.549988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.550018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.550381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.550413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.550772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.550802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.551177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.551209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.551558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.551590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.551980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.552010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.552386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.552419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.552773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.552804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.553170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.553203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.553430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.553464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 
00:31:07.455 [2024-11-20 10:48:39.553810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.553840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.554210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.554244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.554488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.554523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.554874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.554904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.555254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.555286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.555636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.555667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.556034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.556064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.556405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.556438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.556793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.556823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.557182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.557214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 
00:31:07.455 [2024-11-20 10:48:39.557569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.557601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.557971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.558009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.558352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.558385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.558753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.558785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.559141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.455 [2024-11-20 10:48:39.559190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.455 qpair failed and we were unable to recover it. 00:31:07.455 [2024-11-20 10:48:39.559536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.559565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.559924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.559955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.560312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.560346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.560699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.560729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.561090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.561121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 
00:31:07.456 [2024-11-20 10:48:39.561477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.561510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.561868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.561898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.562265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.562298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.562651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.562682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.563042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.563073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.563441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.563475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.563826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.563857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.564212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.564246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.564619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.564650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.565014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.565046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 
00:31:07.456 [2024-11-20 10:48:39.565282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.565313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.565682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.565713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.566070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.566103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.566465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.566497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.566854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.566887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.567236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.567269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.567635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.567666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.567911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.567940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.568190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.568227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 00:31:07.456 [2024-11-20 10:48:39.568584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.456 [2024-11-20 10:48:39.568615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.456 qpair failed and we were unable to recover it. 
00:31:07.456 [2024-11-20 10:48:39.569037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.456 [2024-11-20 10:48:39.569069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.456 qpair failed and we were unable to recover it.
00:31:07.456-00:31:07.462 [the same three-message sequence — posix_sock_create connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats verbatim for every subsequent connect attempt, timestamps 2024-11-20 10:48:39.569 through 10:48:39.650]
00:31:07.462 [2024-11-20 10:48:39.650422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.650454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.650805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.650837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.651197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.651229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.651589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.651621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.651974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.652005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.652340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.652373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.652741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.652773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.653128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.653171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.653504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.653536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.653894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.653924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 
00:31:07.462 [2024-11-20 10:48:39.654280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.654313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.654669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.654700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.655061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.655092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.655455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.655487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.655844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.655874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.656132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.656176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.656570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.656602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.656953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.656986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.462 qpair failed and we were unable to recover it. 00:31:07.462 [2024-11-20 10:48:39.657246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.462 [2024-11-20 10:48:39.657278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.657638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.657670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 
00:31:07.463 [2024-11-20 10:48:39.657908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.657941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.658307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.658340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.658706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.658738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.659099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.659130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.659513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.659545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.659807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.659839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.660226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.660259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.660515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.660547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.660921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.660959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.661308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.661341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 
00:31:07.463 [2024-11-20 10:48:39.661723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.661754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.662106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.662137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.662558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.662590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.662941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.662972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.663340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.663373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.663754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.663785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.664177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.664210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.664597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.664628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.664991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.665022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.665385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.665418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 
00:31:07.463 [2024-11-20 10:48:39.665796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.665828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.666200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.666231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.666623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.666656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.666911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.666944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.667190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.667229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.667598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.667630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.667955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.667988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.668229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.668262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.668634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.668666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.669012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.669042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 
00:31:07.463 [2024-11-20 10:48:39.669385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.669417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.669685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.669715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.669961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.669990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.670337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.670368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.670610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.670641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.670887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.463 [2024-11-20 10:48:39.670917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.463 qpair failed and we were unable to recover it. 00:31:07.463 [2024-11-20 10:48:39.671265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.671297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.671557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.671591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.671938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.671970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.672329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.672362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 
00:31:07.464 [2024-11-20 10:48:39.672742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.672773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.673101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.673130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.673521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.673553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.673931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.673963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.674318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.674349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.674711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.674744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.675107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.675138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.675549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.675581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.675831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.675868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.676101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.676131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 
00:31:07.464 [2024-11-20 10:48:39.676321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.676352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.676704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.676734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.677095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.677127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.677525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.677557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.677907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.677939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.678305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.678339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.678693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.678724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.679113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.679143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.679535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.679568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.679938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.679968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 
00:31:07.464 [2024-11-20 10:48:39.680338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.680372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.680748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.680778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.681029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.681059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.681290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.681322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.681702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.681734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.682096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.682127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.682513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.682547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.682920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.682952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.683312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.683343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.683709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.683740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 
00:31:07.464 [2024-11-20 10:48:39.684042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.684072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.684438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.684470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.684831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.684863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.685236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.464 [2024-11-20 10:48:39.685269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.464 qpair failed and we were unable to recover it. 00:31:07.464 [2024-11-20 10:48:39.685703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.685734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.686095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.686126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.686519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.686553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.686786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.686816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.687188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.687220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.687603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.687635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 
00:31:07.465 [2024-11-20 10:48:39.687989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.688019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.688383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.688416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.688784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.688815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.689225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.689256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.689615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.689644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.690009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.690039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.690496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.690528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.690882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.690913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.691249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.691288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.691656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.691687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 
00:31:07.465 [2024-11-20 10:48:39.692050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.692081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.692418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.692452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.692818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.692850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.693215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.693248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.693605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.693635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.693992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.694022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.694378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.694411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.694769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.694800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.695155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.695201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.695601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.695631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 
00:31:07.465 [2024-11-20 10:48:39.695987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.696018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.696383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.696416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.696789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.696821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.697181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.697214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.697568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.697601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.697960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.697990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.698339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.698370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.698737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.698769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.699128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.699170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 00:31:07.465 [2024-11-20 10:48:39.699531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.465 [2024-11-20 10:48:39.699562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.465 qpair failed and we were unable to recover it. 
00:31:07.465 [2024-11-20 10:48:39.699796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.465 [2024-11-20 10:48:39.699825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.465 qpair failed and we were unable to recover it.
00:31:07.465-00:31:07.471 [... the same three-message failure (posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for roughly 200 further reconnect attempts, target timestamps 10:48:39.700266 through 10:48:39.781261; every attempt fails identically and the elided lines carry no additional information ...]
00:31:07.471 [2024-11-20 10:48:39.781621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.781652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.782006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.782038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.782300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.782331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.782688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.782719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.783076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.783108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.783516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.783548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.783893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.783925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.784283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.784315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.784691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.471 [2024-11-20 10:48:39.784727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.471 qpair failed and we were unable to recover it. 00:31:07.471 [2024-11-20 10:48:39.784968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.785001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 
00:31:07.472 [2024-11-20 10:48:39.785393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.785425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.785770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.785801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.786170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.786202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.786553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.786584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.786944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.786976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.787355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.787387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.787736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.787768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.788114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.788145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.788520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.788552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.788915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.788950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 
00:31:07.472 [2024-11-20 10:48:39.789319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.789353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.789718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.789751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.790109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.790142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.790565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.790598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.790958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.790991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.791337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.791369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.791718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.791749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.792109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.792141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.792505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.792536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.792910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.792941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 
00:31:07.472 [2024-11-20 10:48:39.793300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.793336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.793632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.793663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.794013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.794044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.794413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.794447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.794800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.794831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.795197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.795231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.795603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.795635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.796006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.796038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.796397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.796431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.796790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.796821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 
00:31:07.472 [2024-11-20 10:48:39.797183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.797217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.797582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.797614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.797974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.798006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.798365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.798398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.798756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.798787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.799156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.799218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.799582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-20 10:48:39.799614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-20 10:48:39.799973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.800005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.800363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.800400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.800753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.800785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-20 10:48:39.801152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.801193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.801564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.801594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.801834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.801867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.802272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.802305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.802696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.802727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.803091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.803123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.803493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.803525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.803874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.803904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.804265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.804299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.804647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.804679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-20 10:48:39.805044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.805077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.805437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.805468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.805852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.805885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.806241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.806273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.806632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.806663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.807022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.807052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.807412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.807443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.807803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.807834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.808195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.808228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.808451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.808485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-20 10:48:39.808848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.808880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.809233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.809265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.809634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.809664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.810025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.810057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.810427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.810460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.810829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.810866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.811216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.811247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.811677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.811706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.812055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.812086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.812345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.812376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-20 10:48:39.812725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.812756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.813109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.813141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.813511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.813545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.813892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.813925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-20 10:48:39.814289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-20 10:48:39.814322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-20 10:48:39.814693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-20 10:48:39.814724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.815090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.815125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.815504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.815538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.815940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.815979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.816333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.816365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 
00:31:07.750 [2024-11-20 10:48:39.816764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.816796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.817148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.817191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.817577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.817611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.817965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.817997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.818340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.818373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.818725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.818756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.750 qpair failed and we were unable to recover it. 00:31:07.750 [2024-11-20 10:48:39.819116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.750 [2024-11-20 10:48:39.819148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.819512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.819543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.819915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.819947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.820297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.820330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 
00:31:07.751 [2024-11-20 10:48:39.820705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.820738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.821100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.821131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.821521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.821554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.821911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.821943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.822297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.822331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.822692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.822724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.823077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.823108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.823505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.823539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.823898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.823929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.824288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.824321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 
00:31:07.751 [2024-11-20 10:48:39.824677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.824710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.825061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.825092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.825450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.825483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.825851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.825883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.826256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.826289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.826551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.826583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.826927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.826958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.827311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.827343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.827701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.827732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.828069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.828101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 
00:31:07.751 [2024-11-20 10:48:39.828493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.828525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.828887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.828919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.829271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.829304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.829660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.829691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.829950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.829980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.830326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.830360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.830609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.830640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.830994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.831026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.831394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.831433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.831811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.831843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 
00:31:07.751 [2024-11-20 10:48:39.832206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.832241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.832606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.832638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.833036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.833069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.751 [2024-11-20 10:48:39.833303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.751 [2024-11-20 10:48:39.833340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.751 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.833691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.833723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.834077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.834109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.834496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.834528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.834882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.834914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.835267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.835299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.835659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.835690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 
00:31:07.752 [2024-11-20 10:48:39.836051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.836082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.836446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.836478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.836869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.836902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.837283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.837317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.837666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.837696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.838068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.838101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.839069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.839119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.839545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.839582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.839931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.839964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 00:31:07.752 [2024-11-20 10:48:39.840184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.752 [2024-11-20 10:48:39.840222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.752 qpair failed and we were unable to recover it. 
00:31:07.752 [2024-11-20 10:48:39.840490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.840521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.840872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.840904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.841262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.841295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.841657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.841687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.842048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.842079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.842463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.842505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.842848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.842880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.843240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.843271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.843638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.843668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.844025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.844056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.844424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.844457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.844698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.844729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.845081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.845111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.845364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.845399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.845651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.845682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.845925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.845959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.846312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.846344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.846737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.846767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.847120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.847151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.847554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.847585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.847933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.847967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.752 [2024-11-20 10:48:39.848232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.752 [2024-11-20 10:48:39.848266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.752 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.849309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.849367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.849713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.849747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.850087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.850119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.850528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.850563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.850916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.850950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.851311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.851343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.851699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.851731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.852090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.852121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.852506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.852538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.852892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.852923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.853283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.853317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.853674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.853704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.854063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.854095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.854465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.854497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.854734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.854765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.855120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.855151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.855535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.855566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.855912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.855942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.856297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.856330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.856696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.856726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.857081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.857112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.857469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.857501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.857854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.857886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.858241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.858284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.858637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.858668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.858914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.858950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.859296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.859328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.859693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.859724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.860082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.860115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.860509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.860540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.860883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.860914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.861277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.861310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.861676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.861706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.861970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.861999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.862341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.862372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.862721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.862751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.863100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.863130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.753 qpair failed and we were unable to recover it.
00:31:07.753 [2024-11-20 10:48:39.863552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.753 [2024-11-20 10:48:39.863584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.863938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.863969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.864334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.864366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.864725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.864756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.865123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.865155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.865528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.865559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.865916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.865948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.866310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.866343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.866701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.866732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.867083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.867116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.867493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.867524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.867875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.867906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.868271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.868305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.868681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.868712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.869060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.869091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.869350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.869381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.869728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.869758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.870096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.870127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.870532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.870564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.870910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.870940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.871291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.871323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.871671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.871703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.872049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.872081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.872411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.872443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.872801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.872831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.873186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.873218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.873573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.873610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.873967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.873999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.874391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.874423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.874776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.874808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.875175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.875206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.875565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.875595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.876025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.876056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.876410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.876443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.876799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.876833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.877185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.877218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.877577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.877608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.877958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.754 [2024-11-20 10:48:39.877990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.754 qpair failed and we were unable to recover it.
00:31:07.754 [2024-11-20 10:48:39.878346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.878378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.878726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.878758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.879109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.879141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.879513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.879545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.879903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.879934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.880280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.880313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.880674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.880704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.881052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.881083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.881447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.881479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.881679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.881714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.882065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.882096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.882461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.882496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.882848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.882879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.883225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.883259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.883633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.883664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.884026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.884058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.884413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.884444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.884801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.884834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.885198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.885230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.885562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.885593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.885941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.885970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.886421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.886453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.886845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.886876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.887227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.887261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.887624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.887655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.888011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.888046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.888391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.888423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.888769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.888799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.889155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.889203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.889555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.889585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.889942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.889973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.890349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.890382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.755 [2024-11-20 10:48:39.890722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.755 [2024-11-20 10:48:39.890754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.755 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.891102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.891134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.891515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.891547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.891902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.891933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.892368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.892402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.892751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.892782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.893143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.893183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.893550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.893580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.893955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.893986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.894351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.894385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.894741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.894772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.895125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.895181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.895550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.895580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.895947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.895978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.896332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.896367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.896715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.896745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.897100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.897131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.897495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.897528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.897880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.897909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.898319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.898351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.898730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.898763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.899115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.899146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.899574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.899606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.899962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.899995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.900336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.900369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.900729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.900760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.901118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.901150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.901518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.901549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.901918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.901949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.902326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.902359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.902603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.902636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.903029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.903060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.903415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.903447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.903806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.903838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.904195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.904227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.904610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.904640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.905006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.905043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.756 [2024-11-20 10:48:39.905412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.756 [2024-11-20 10:48:39.905447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.756 qpair failed and we were unable to recover it.
00:31:07.757 [2024-11-20 10:48:39.905794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.757 [2024-11-20 10:48:39.905825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.757 qpair failed and we were unable to recover it.
00:31:07.757 [2024-11-20 10:48:39.906196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.757 [2024-11-20 10:48:39.906230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.757 qpair failed and we were unable to recover it.
00:31:07.757 [2024-11-20 10:48:39.906626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.757 [2024-11-20 10:48:39.906657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.757 qpair failed and we were unable to recover it.
00:31:07.757 [2024-11-20 10:48:39.907088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.907119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.907535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.907568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.907849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.907880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.908239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.908271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.908623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.908655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.909019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.909050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.909413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.909447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.909850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.909881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.910250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.910282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.910641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.910671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 
00:31:07.757 [2024-11-20 10:48:39.911032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.911064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.911408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.911439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.911802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.911833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.912074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.912105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.912555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.912588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.912934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.912966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.913338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.913370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.913741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.913772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.914071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.914101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.914575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.914609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 
00:31:07.757 [2024-11-20 10:48:39.914949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.914980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.915337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.915368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.915724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.915758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.916102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.916134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.916504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.916537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.916885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.916915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.917285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.917318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.917580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.917611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.917959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.917992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.918339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.918372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 
00:31:07.757 [2024-11-20 10:48:39.918719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.918752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.919100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.919130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.919515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.919547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.919896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.919925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.757 [2024-11-20 10:48:39.920293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.757 [2024-11-20 10:48:39.920324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.757 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.920700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.920737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.921081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.921114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.921491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.921524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.921879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.921911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.922184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.922219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 
00:31:07.758 [2024-11-20 10:48:39.922584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.922615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.922988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.923019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.923388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.923421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.923805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.923836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.924081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.924110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.924496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.924529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.924914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.924946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.925348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.925380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.925749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.925781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.926129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.926183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 
00:31:07.758 [2024-11-20 10:48:39.926561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.926592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.926952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.926984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.927352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.927384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.927763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.927794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.928176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.928208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.928585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.928614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.928981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.929011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.929364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.929396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.929767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.929796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.930152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.930205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 
00:31:07.758 [2024-11-20 10:48:39.930496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.930526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.930917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.930948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.931307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.931341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.931697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.931726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.932090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.932121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.932567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.932600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.932950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.932983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.933356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.933389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.933740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.933772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.934133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.934184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 
00:31:07.758 [2024-11-20 10:48:39.934561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.934591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.934839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.758 [2024-11-20 10:48:39.934869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.758 qpair failed and we were unable to recover it. 00:31:07.758 [2024-11-20 10:48:39.935130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.935177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.935559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.935591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.935948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.935980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.936341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.936387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.936737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.936769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.937128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.937171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.937577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.937608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.937962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.937993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 
00:31:07.759 [2024-11-20 10:48:39.938369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.938401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.938760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.938792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.939151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.939194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.939542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.939574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.939933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.939963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.940322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.940355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.940714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.940746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.940979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.941008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.941188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.941219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.941686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.941718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 
00:31:07.759 [2024-11-20 10:48:39.942074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.942105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.942496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.942529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.942880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.942911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.943147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.943190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.943558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.943590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.943837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.943868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.944226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.944259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.944611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.944642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.945001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.945033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.945422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.945454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 
00:31:07.759 [2024-11-20 10:48:39.945824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.945856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.946207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.946239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.946369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.946398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.946792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.946823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.947189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.947220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.947453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.759 [2024-11-20 10:48:39.947483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.759 qpair failed and we were unable to recover it. 00:31:07.759 [2024-11-20 10:48:39.947706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.948099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.948130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.948507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.948539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.948782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.948812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 
00:31:07.760 [2024-11-20 10:48:39.949177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.949210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.949594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.949626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.949988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.950019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.950399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.950432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.950802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.950835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.951191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.951229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.951634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.951664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.952016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.952047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.952385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.952418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.952803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.952833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 
00:31:07.760 [2024-11-20 10:48:39.953073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.953106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.953538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.953572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.953932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.953963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.954325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.954357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.954585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.954615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.954861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.954892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.955256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.955288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.955657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.955689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.956060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.956090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.956330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.956361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 
00:31:07.760 [2024-11-20 10:48:39.956707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.956739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.957099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.957129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.957546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.957579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.957939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.957970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.958331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.958363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.958736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.958768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.959016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.959047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.959409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.959441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.959813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.959844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.960181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.960214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 
00:31:07.760 [2024-11-20 10:48:39.960567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.960599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.960862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.960892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.961258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.961293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.760 [2024-11-20 10:48:39.961655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.760 [2024-11-20 10:48:39.961686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.760 qpair failed and we were unable to recover it. 00:31:07.761 [2024-11-20 10:48:39.962097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.962128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 00:31:07.761 [2024-11-20 10:48:39.962514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.962548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 00:31:07.761 [2024-11-20 10:48:39.962792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.962823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 00:31:07.761 [2024-11-20 10:48:39.963191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.963224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 00:31:07.761 [2024-11-20 10:48:39.963508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.963539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 00:31:07.761 [2024-11-20 10:48:39.963760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.963793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 
00:31:07.761 [2024-11-20 10:48:39.964040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.761 [2024-11-20 10:48:39.964072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.761 qpair failed and we were unable to recover it. 
00:31:07.761 [... the same three-line failure triplet (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 10:48:39.964 through 10:48:40.044; duplicate entries elided ...] 
00:31:07.766 [2024-11-20 10:48:40.044184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.044218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 
00:31:07.766 [2024-11-20 10:48:40.044574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.044605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 00:31:07.766 [2024-11-20 10:48:40.044968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.044998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 00:31:07.766 [2024-11-20 10:48:40.045239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.045270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 00:31:07.766 [2024-11-20 10:48:40.045651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.045682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 00:31:07.766 [2024-11-20 10:48:40.046114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.046144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 00:31:07.766 [2024-11-20 10:48:40.046544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.046577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.766 qpair failed and we were unable to recover it. 00:31:07.766 [2024-11-20 10:48:40.046966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.766 [2024-11-20 10:48:40.046997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.047385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.047419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.047775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.047806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.048183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.048215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 
00:31:07.767 [2024-11-20 10:48:40.048642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.048675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.048925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.048957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.049252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.049284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.049560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.049593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.049856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.049889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.050136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.050179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.050533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.050565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.050836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.050867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.051131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.051174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.051487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.051518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 
00:31:07.767 [2024-11-20 10:48:40.051790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.051819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.052184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.052218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.052574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.052612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.052831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.052865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.053235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.053268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.053643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.053675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.054026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.054057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.054401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.054435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.054815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.054847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.055220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.055253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 
00:31:07.767 [2024-11-20 10:48:40.055618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.055649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.055921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.055951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.056310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.056342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.056722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.056753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.057126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.057172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.057502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.057533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.057704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.057737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.058136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.058195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.058560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.058590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.058959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.058992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 
00:31:07.767 [2024-11-20 10:48:40.059332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.059364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.059728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.059758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.060106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.060139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.767 qpair failed and we were unable to recover it. 00:31:07.767 [2024-11-20 10:48:40.060488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.767 [2024-11-20 10:48:40.060519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.060905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.060936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.061313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.061345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.061671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.061704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.062059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.062089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.062454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.062486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.062853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.062886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 
00:31:07.768 [2024-11-20 10:48:40.063249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.063282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.063641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.063672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.064032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.064065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.064331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.064362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.064733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.064766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.065127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.065167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.065453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.065483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.065832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.065862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.066224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.066257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.066673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.066703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 
00:31:07.768 [2024-11-20 10:48:40.067060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.067092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.067460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.067493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.067853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.067891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.068219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.068251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.068617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.068647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.068883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.068918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.069288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.069320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.069680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.069712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.070075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.070106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.070454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.070486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 
00:31:07.768 [2024-11-20 10:48:40.070838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.070870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.071175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.071209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.071443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.071473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.071716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.071746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.072124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.072154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.072546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.072578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.072936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.072969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.073213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.073245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.073622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.073654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 00:31:07.768 [2024-11-20 10:48:40.073998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.768 [2024-11-20 10:48:40.074030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.768 qpair failed and we were unable to recover it. 
00:31:07.768 [2024-11-20 10:48:40.074262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.074294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.074686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.074718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.075075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.075106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.075382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.075415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.075786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.075816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.076071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.076100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.076515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.076547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.076914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.076946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.077211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.077242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.077605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.077635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 
00:31:07.769 [2024-11-20 10:48:40.077998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.078030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.078401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.078433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.078785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.078816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.079179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.079212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.079582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.079612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.079789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.079819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.080216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.080248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.080603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.080633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.080995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.081028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.081390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.081421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 
00:31:07.769 [2024-11-20 10:48:40.081787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.081818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.082070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.082105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.082503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.082541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.082787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.082820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.083178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.083211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.083568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.083599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.083944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.083976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.084337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.084368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.084731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.084762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.769 [2024-11-20 10:48:40.085119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.085153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 
00:31:07.769 [2024-11-20 10:48:40.085480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.769 [2024-11-20 10:48:40.085511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.769 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.085869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.085901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.086252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.086287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.086645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.086676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.087044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.087075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.087419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.087453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.087791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.087823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.088180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.088214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.088466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.088498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.088888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.088920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 
00:31:07.770 [2024-11-20 10:48:40.089281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.089313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.089668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.089701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.090052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.090083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.090444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.090476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.090723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.090752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.091148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.091190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.091560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.091591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.091950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.091982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.092351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.092382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 00:31:07.770 [2024-11-20 10:48:40.092742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.770 [2024-11-20 10:48:40.092775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:07.770 qpair failed and we were unable to recover it. 
00:31:07.770 [2024-11-20 10:48:40.093176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.770 [2024-11-20 10:48:40.093209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.770 qpair failed and we were unable to recover it.
00:31:07.770 [2024-11-20 10:48:40.093572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.770 [2024-11-20 10:48:40.093604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:07.770 qpair failed and we were unable to recover it.
[... the same three-line posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats 207 more times between 10:48:40.093959 and 10:48:40.172773 (Jenkins timestamps 00:31:07.770 through 00:31:08.052), every attempt failing with errno = 111 (ECONNREFUSED) for tqpair=0x7f2388000b90, addr=10.0.0.2, port=4420 ...]
00:31:08.052 [2024-11-20 10:48:40.173129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.052 [2024-11-20 10:48:40.173170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.052 qpair failed and we were unable to recover it.
00:31:08.052 [2024-11-20 10:48:40.173489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.173520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.173874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.173905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.174200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.174242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.174500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.174533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.174787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.174817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.175072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.175102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.175475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.175509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.175856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.175888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.176254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.176287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.176532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.176564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 
00:31:08.052 [2024-11-20 10:48:40.177014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.177045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.177309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.177341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.177713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.177744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.052 [2024-11-20 10:48:40.178125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.052 [2024-11-20 10:48:40.178155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.052 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.178544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.178576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.178942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.178972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.179335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.179367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.179760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.179791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.180172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.180205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.180566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.180596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 
00:31:08.053 [2024-11-20 10:48:40.180972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.181003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.181429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.181461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.181829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.181860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.182107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.182138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.182440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.182474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.182709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.182740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.183088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.183119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.183522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.183556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.183915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.183947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.184360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.184393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 
00:31:08.053 [2024-11-20 10:48:40.184722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.184752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.185101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.185133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.185528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.185560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.185924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.185955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.186301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.186332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.186695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.186725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.187107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.187138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.187568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.187600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.187965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.187995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.188388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.188419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 
00:31:08.053 [2024-11-20 10:48:40.188775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.188806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.189178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.189211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.189447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.189485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.189839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.189871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.190117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.190147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.190560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.190593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.190946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.190978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.191347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.191381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.191762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.191794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.192025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.192057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 
00:31:08.053 [2024-11-20 10:48:40.192327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.192363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.192711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.192742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.192998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.193031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.193394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.193427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.193810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.193841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.194212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.194243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.194624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.194656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.194883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.194915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.195287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.195319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.195672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.195704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 
00:31:08.053 [2024-11-20 10:48:40.196075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.196105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.196462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.196495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.196864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.196894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.197236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.197269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.197625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.197657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.198012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.198043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.198446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.053 [2024-11-20 10:48:40.198480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.053 qpair failed and we were unable to recover it. 00:31:08.053 [2024-11-20 10:48:40.198839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.198870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.199226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.199259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.199609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.199641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-11-20 10:48:40.200019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.200050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.200417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.200449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.200835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.200866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.201210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.201241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.201615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.201646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.202005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.202036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.202401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.202432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.202791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.202822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.203181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.203213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.203548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.203579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-11-20 10:48:40.203821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.203851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.204220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.204252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.204504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.204546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.204897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.204929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.205062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.205093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.205517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.205549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.205802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.205834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.206180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.206212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.206582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.206612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.206973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.207003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-11-20 10:48:40.207246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.207281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.207711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.207742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.208101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.208132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.208535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.208567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.208927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.208956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.209202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.209234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.209609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.209641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.210042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.210072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.210420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.210452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.210806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.210838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-11-20 10:48:40.211189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.211220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.211583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.211613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.212039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.212070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.212427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.212458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.212702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.212734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.212991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.213020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.213346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.213378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.213793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.213824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.214057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.214091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.214519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.214552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-11-20 10:48:40.214915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.214945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.215300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.215333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.215694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.215724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.216084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.216114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.216343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.216376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.216748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.216779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.217138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.217183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.217548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.217578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.217935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.217965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.054 [2024-11-20 10:48:40.218323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.218355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 
00:31:08.054 [2024-11-20 10:48:40.218708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.054 [2024-11-20 10:48:40.218741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.054 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.219084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.219114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.219372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.219410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.219824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.219854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.220215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.220249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.220606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.220636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.221016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.221046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.221419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.221452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.221796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.221827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 00:31:08.055 [2024-11-20 10:48:40.222183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.055 [2024-11-20 10:48:40.222215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.055 qpair failed and we were unable to recover it. 
00:31:08.055 [2024-11-20 10:48:40.222551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.055 [2024-11-20 10:48:40.222584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.055 qpair failed and we were unable to recover it.
00:31:08.059 [the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation from 2024-11-20 10:48:40.222944 through 2024-11-20 10:48:40.303518]
00:31:08.059 [2024-11-20 10:48:40.303910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.303939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.304285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.304317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.304679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.304710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.305056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.305087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.305421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.305453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.305824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.305855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.306215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.306246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.306616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.306647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.307076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.307107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.307453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.307491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 
00:31:08.059 [2024-11-20 10:48:40.307745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.307775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.308126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.308157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.308530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.308561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.308916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.308946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.309327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.309359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.309707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.309738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.310106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.310136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.310541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.310574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.310926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.310956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.311313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.311346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 
00:31:08.059 [2024-11-20 10:48:40.311723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.311754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.312110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.312141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.312524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.312556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.312913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.312943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.313384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.313416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.313773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.313803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.314179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.314211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.314455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.314484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.314855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.314887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.315245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.315277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 
00:31:08.059 [2024-11-20 10:48:40.315626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.315658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.316021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.316051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.316417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.316448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-20 10:48:40.316832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-20 10:48:40.316862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.317233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.317265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.317640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.317671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.318018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.318049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.318408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.318440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.318794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.318825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.319090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.319120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-20 10:48:40.319515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.319548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.319905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.319936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.320290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.320323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.320681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.320713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.321071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.321104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.321486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.321519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.321875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.321905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.322256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.322289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.322644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.322674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.323036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.323072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-20 10:48:40.323440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.323473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.323833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.323864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.324216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.324248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.324585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.324615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.324964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.324996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.325352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.325384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.325747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.325778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.326138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.326187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.326531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.326563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.326921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.326951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-20 10:48:40.327320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.327352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.327708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.327738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.327988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.328019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.328391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.328424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.328814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.328844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.329188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.329221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.329586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.329617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.329982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.330013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.330374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.330406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.330759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.330789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-20 10:48:40.331137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.331178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.331543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.331574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.331934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.331966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.332340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.332371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.332740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.332770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.333127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.333167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.333518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.333549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.333905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.333936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.334293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.334326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.334682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.334715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-20 10:48:40.335074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.335105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.335469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.335501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.335800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.335832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.336196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.336230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.336453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.336485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.336737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.336772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.337125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-20 10:48:40.337155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-20 10:48:40.337566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.337599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.337965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.337998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.338362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.338401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-20 10:48:40.338753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.338785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.339058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.339088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.339449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.339481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.339835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.339867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.340229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.340262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.340632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.340663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.341019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.341050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.341401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.341433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.341786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.341818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.342177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.342209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-20 10:48:40.342608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.342639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.342994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.343025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.343358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.343392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.343748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.343781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.344132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.344177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.344431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.344463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.344898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.344928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.345182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.345215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.345591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.345622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.345983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.346014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-20 10:48:40.346421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.346454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.346804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.346836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.347226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.347258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.347609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.347641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.348007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.348038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.348276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.348311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.348677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.348709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.349099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.349130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.349377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.349408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.349759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.349790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-20 10:48:40.350156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.350203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.350604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.350634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.351079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.351109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.351508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.351542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.351903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.351934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.352294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.352326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.352681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.352713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.353065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.353097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.353455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.353487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.353845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.353884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-20 10:48:40.354236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.354269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.354613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.354644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.354879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.354913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.355267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.355300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.355651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.355682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.356038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.356069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.356436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.356469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.356832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.356863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.357221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.357252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-20 10:48:40.357623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.357656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-20 10:48:40.358011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-20 10:48:40.358041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.358393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.358425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.358792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.358823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.359215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.359249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.359607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.359638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.359890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.359925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.362581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.362651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.363052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.363085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.363437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.363471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.363826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.363857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 
00:31:08.062 [2024-11-20 10:48:40.364212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.364246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.364631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.364661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.365032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.365064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.365460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.365493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.365846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.365878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.366236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.366270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.366558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.366592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.366940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.366972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.367330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.367363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.367732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.367763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 
00:31:08.062 [2024-11-20 10:48:40.368120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.368151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.368434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.368470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.368832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.368863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.369224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.369256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.369618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.369650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.369907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.369938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.370321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.370355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.370711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.370743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.371100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.371131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.371487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.371526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 
00:31:08.062 [2024-11-20 10:48:40.371913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.371944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.372299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.372332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.372699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.372732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.373088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.373120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.373479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.373513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.373744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.373774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.374049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.374081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.374449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.374482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.374846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.374876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.375278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.375310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 
00:31:08.062 [2024-11-20 10:48:40.375685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.375716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.376055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.376086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.376445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.376479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.376828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.376864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.377218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.377251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.377608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.377640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.378080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.378114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.378498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.062 [2024-11-20 10:48:40.378531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.062 qpair failed and we were unable to recover it. 00:31:08.062 [2024-11-20 10:48:40.378774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.378808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.379154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.379198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 
00:31:08.063 [2024-11-20 10:48:40.379545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.379577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.379936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.379967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.380310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.380344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.380717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.380747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.381102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.381135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.381440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.381472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.381841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.381874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.382240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.382275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.382529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.382561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.382916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.382948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 
00:31:08.063 [2024-11-20 10:48:40.383297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.383329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.383680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.383713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.384105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.384136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.384498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.384531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.384782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.384813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.385198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.385231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.385540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.385571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.385923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.385956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.386317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.386349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.386702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.386740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 
00:31:08.063 [2024-11-20 10:48:40.387089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.387120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.387400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.387438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.387809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.387839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.388179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.388212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.388584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.388617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.388971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.389002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.389375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.389406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.389804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.389837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.390204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.390237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.390614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.390645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 
00:31:08.063 [2024-11-20 10:48:40.391018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.391051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.391401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.391432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.391710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.391740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.392194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.392229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.392611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.392643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.393027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.393059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.393414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.393447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.393817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.393847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.394139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.394196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.394558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.394588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 
00:31:08.063 [2024-11-20 10:48:40.394950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.394981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.395345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.395379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.395731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.395763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.396157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.396201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.396577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.396609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.396968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.396999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.397593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.397715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.398056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.398097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.398586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.398696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.063 [2024-11-20 10:48:40.399180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.399220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 
00:31:08.063 [2024-11-20 10:48:40.399490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.063 [2024-11-20 10:48:40.399523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.063 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.399890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.399923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.400431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.400541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.400966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.401007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.401294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.401333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.401692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.401724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.402022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.402054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.402399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.402436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.402794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.402826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.403190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.403222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 
00:31:08.064 [2024-11-20 10:48:40.403619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.404007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.404041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.404393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.404426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.404666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.404701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.405059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.405090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.064 [2024-11-20 10:48:40.405475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.064 [2024-11-20 10:48:40.405506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.064 qpair failed and we were unable to recover it. 00:31:08.340 [2024-11-20 10:48:40.405843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.340 [2024-11-20 10:48:40.405877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.340 qpair failed and we were unable to recover it. 00:31:08.340 [2024-11-20 10:48:40.406232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.340 [2024-11-20 10:48:40.406267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.340 qpair failed and we were unable to recover it. 00:31:08.340 [2024-11-20 10:48:40.406633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.340 [2024-11-20 10:48:40.406664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.340 qpair failed and we were unable to recover it. 00:31:08.340 [2024-11-20 10:48:40.407010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.340 [2024-11-20 10:48:40.407041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.340 qpair failed and we were unable to recover it. 
00:31:08.340 [2024-11-20 10:48:40.407404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.340 [2024-11-20 10:48:40.407438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.340 qpair failed and we were unable to recover it. 00:31:08.340 [2024-11-20 10:48:40.407797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.407829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.408185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.408218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.408596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.408636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.408985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.409015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.409377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.409408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.409795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.409826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.410177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.410210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.410575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.410605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.410961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.410992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 
00:31:08.341 [2024-11-20 10:48:40.411347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.411379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.411736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.411766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.412137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.412183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.412577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.412607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.412852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.412886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.413277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.413308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.413624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.413656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.414002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.414033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.414295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.414327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.414684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.414714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 
00:31:08.341 [2024-11-20 10:48:40.415084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.415114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.415479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.415511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.415867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.415897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.416245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.416277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.416653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.416684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.417031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.417060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.417404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.417438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.417803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.417834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.418190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.418222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.418570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.418601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 
00:31:08.341 [2024-11-20 10:48:40.418958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.419005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.419359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.419391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.419738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.419771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.420172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.420204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.420561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.420592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.420955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.420986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.421343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.421376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.421735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.341 [2024-11-20 10:48:40.421767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.341 qpair failed and we were unable to recover it. 00:31:08.341 [2024-11-20 10:48:40.422129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-11-20 10:48:40.422169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.342 qpair failed and we were unable to recover it. 00:31:08.342 [2024-11-20 10:48:40.422536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.342 [2024-11-20 10:48:40.422566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.342 qpair failed and we were unable to recover it. 
00:31:08.342 [2024-11-20 10:48:40.422922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.342 [2024-11-20 10:48:40.422953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.342 qpair failed and we were unable to recover it.
00:31:08.342-00:31:08.348 [... the same connect()/qpair-failure triplet repeats 209 more times (210 occurrences in total, timestamps 10:48:40.423312 through 10:48:40.503185), each with errno = 111, tqpair=0x17890c0, addr=10.0.0.2, port=4420 ...]
00:31:08.348 [2024-11-20 10:48:40.503555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.503587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.503964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.503994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.504351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.504383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.504750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.504782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.505139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.505178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.505559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.505590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.505955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.505985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.506341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.506373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.506746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.506776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.507020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.507049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 
00:31:08.348 [2024-11-20 10:48:40.507416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.507447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.507795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.507826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.508181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.508213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.508574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.508605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.508947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.508978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.509336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.509366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.509714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.509746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.509997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.510032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.510392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.510423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.510758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.510788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 
00:31:08.348 [2024-11-20 10:48:40.511137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.511198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.511425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.511458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.511805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.511837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.512194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.512227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.512573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.512610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.512936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.512967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.348 qpair failed and we were unable to recover it. 00:31:08.348 [2024-11-20 10:48:40.513317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.348 [2024-11-20 10:48:40.513349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.513705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.513736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.514090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.514120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.514516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.514550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 
00:31:08.349 [2024-11-20 10:48:40.514903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.514932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.515294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.515326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.515582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.515612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.515960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.515990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.516343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.516375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.516734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.516765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.517115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.517145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.517494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.517525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.517863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.517896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.518128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.518183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 
00:31:08.349 [2024-11-20 10:48:40.518559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.518592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.518921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.518952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.519297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.519330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.519691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.519722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.520065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.520096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.520456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.520488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.520869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.520899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.521256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.521288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.521649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.521680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.522037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.522067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 
00:31:08.349 [2024-11-20 10:48:40.522408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.522440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.522786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.522816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.523225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.523256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.523612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.523643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.523999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.524030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.524446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.524478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.524833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.524863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.525274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.525306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.525663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.525695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.526048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.526079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 
00:31:08.349 [2024-11-20 10:48:40.526428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.526459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.526802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.526832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.527199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.527231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.527619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.527651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-20 10:48:40.528029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-20 10:48:40.528061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.528391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.528430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.528779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.528810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.529154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.529194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.529588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.529618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.529959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.529991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 
00:31:08.350 [2024-11-20 10:48:40.530346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.530377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.530664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.530695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.531140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.531179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.531524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.531553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.531917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.531947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.532310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.532343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.532695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.532728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.533076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.533107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.533365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.533400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.533778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.533809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 
00:31:08.350 [2024-11-20 10:48:40.534181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.534212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.534570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.534600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.534947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.534980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.535336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.535368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.535722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.535753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.535992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.536022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.536355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.536387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.536762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.536794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.537136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.537177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.537497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.537527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 
00:31:08.350 [2024-11-20 10:48:40.537868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.537898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.538264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.538303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.538672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.538709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.539063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.539095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.539446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.539479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.539841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.539872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-20 10:48:40.540267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-20 10:48:40.540299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.540664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.540694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.541053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.541083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.541441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.541474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 
00:31:08.351 [2024-11-20 10:48:40.541830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.541861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.542228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.542261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.542554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.542584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.542933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.542963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.543341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.543371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.543727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.543757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.544004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.544036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.544388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.544420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.544766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.544796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.545156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.545197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 
00:31:08.351 [2024-11-20 10:48:40.545596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.545626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.545974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.546007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.546270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.546301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.546710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.546740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.547083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.547114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.547469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.547502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.547845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.547876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.548145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.548187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.548583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.548614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.548968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.549005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 
00:31:08.351 [2024-11-20 10:48:40.549415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.549448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.549806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.549837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.550078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.550113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.550488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.550523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.550872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.550903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.551146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.551189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.551542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.551573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.551827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.551856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.552199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.552231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-20 10:48:40.552523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-20 10:48:40.552554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 
00:31:08.351 [2024-11-20 10:48:40.552895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.351 [2024-11-20 10:48:40.552926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.351 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 10:48:40.553 and 10:48:40.632; duplicate records omitted ...]
00:31:08.357 [2024-11-20 10:48:40.632243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.357 [2024-11-20 10:48:40.632279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.357 qpair failed and we were unable to recover it.
00:31:08.357 [2024-11-20 10:48:40.632637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.632668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.633022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.633052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.633419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.633453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.633793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.633825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.634182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.634215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.634469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.634501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.634861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.634890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.635244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.635277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.635629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.635659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 00:31:08.357 [2024-11-20 10:48:40.636014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.357 [2024-11-20 10:48:40.636045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.357 qpair failed and we were unable to recover it. 
00:31:08.357 [2024-11-20 10:48:40.636399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.636431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.636787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.636817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.637186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.637218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.637570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.637600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.637963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.637996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.638258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.638290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.638674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.638704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.639073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.639104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.639449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.639482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.639832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.639862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 
00:31:08.358 [2024-11-20 10:48:40.640220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.640252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.640528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.640561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.640902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.640934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.641296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.641329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.641697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.641728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.642069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.642099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.642477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.642509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.642857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.642889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.643134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.643172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 00:31:08.358 [2024-11-20 10:48:40.643526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.358 [2024-11-20 10:48:40.643556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.358 qpair failed and we were unable to recover it. 
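errno 111 is Linux's ECONNREFUSED: the host side's posix_sock_create() keeps issuing TCP connect() calls to 10.0.0.2:4420 (the standard NVMe-oF TCP port) while nothing is listening there, which is exactly the window this target_disconnect test creates. A minimal probe loop with the same shape is sketched below; it assumes only bash's /dev/tcp redirection, and the function name and retry cadence are illustrative, not part of the suite.

    # Illustrative only -- probe_listener is not an SPDK helper.
    # bash's /dev/tcp/<host>/<port> redirection performs a real TCP connect();
    # while the target is down it fails just like the errno 111 records above.
    probe_listener() {
        local addr=$1 port=$2 tries=0
        until (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; do
            tries=$((tries + 1))   # each failure is one refused connect()
            sleep 0.1
        done                       # the subshell closes fd 3 on exit
        echo "listener on ${addr}:${port} is back after ${tries} refused connects"
    }
    probe_listener 10.0.0.2 4420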
00:31:08.358 [2024-11-20 10:48:40.643909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.358 [2024-11-20 10:48:40.643939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.358 qpair failed and we were unable to recover it.
00:31:08.358 [... the failure triple repeats through 10:48:40.654 while the harness lines below arrive interleaved with it ...]
00:31:08.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2244270 Killed "${NVMF_APP[@]}" "$@"
00:31:08.358 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:08.358 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:08.358 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:08.358 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:08.358 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
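The previous target (pid 2244270) was SIGKILLed at script line 36, and disconnect_init is now bringing a fresh one up via nvmfappstart -m 0xF0. Per SPDK's usual app options, -m takes a hex core mask in which bit n selects CPU core n, so 0xF0 pins the restarted target to cores 4-7. A small helper (illustrative only, not from the suite) expands such a mask:

    # Illustrative only: expand a hex core mask into the CPU list it selects.
    mask_to_cores() {
        local mask=$(( $1 ))   # bash arithmetic accepts 0x-prefixed hex
        local core=0 list=""
        while (( mask > 0 )); do
            if (( mask & 1 )); then list="${list} ${core}"; fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "cores:${list}"
    }
    mask_to_cores 0xF0   # -> cores: 4 5 6 7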
00:31:08.359 [... connect()/qpair failures continue through 10:48:40.660, interleaved with the harness lines below ...]
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2245131
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2245131
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2245131 ']'
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:08.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:08.359 10:48:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
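waitforlisten is the harness polling for the new nvmf_tgt (pid 2245131, launched inside the cvl_0_0_ns_spdk network namespace; -i 0 and -e 0xFFFF look like SPDK's shared-memory id and tracepoint-group-mask options) to create its RPC socket at /var/tmp/spdk.sock, bounded by max_retries=100. A stand-in with the same shape is sketched below; the function name and sleep interval are assumptions, not the suite's actual helper.

    # Illustrative stand-in for the wait above -- not SPDK's waitforlisten.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while (( retries > 0 )); do
            # bail out early if the target died instead of starting up
            kill -0 "$pid" 2>/dev/null || { echo "pid ${pid} exited"; return 1; }
            [ -S "$sock" ] && return 0   # -S: path exists and is a socket
            retries=$(( retries - 1 ))
            sleep 0.5
        done
        echo "gave up waiting for ${sock}"
        return 1
    }
    wait_for_rpc_sock 2245131 /var/tmp/spdk.sock 100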
00:31:08.359 [2024-11-20 10:48:40.660424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.359 [2024-11-20 10:48:40.660458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.359 qpair failed and we were unable to recover it.
00:31:08.359 [... ~90 further repetitions of the same triple elided, timestamps advancing from 10:48:40.660 to 10:48:40.694 ...]
00:31:08.362 [2024-11-20 10:48:40.694545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.694577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.694979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.695011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.695410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.695444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.695765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.695805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.696069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.696102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.696479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.696512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.696884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.696916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.697269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.697303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.697676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.697706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-20 10:48:40.698073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-20 10:48:40.698104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 
00:31:08.641 [2024-11-20 10:48:40.698469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.698506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.698867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.698900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.699243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.699276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.699667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.699699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.700072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.700104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.700472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.700505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.700858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.700890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.701148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.701189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.701560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.701591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.701824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.701856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 
00:31:08.641 [2024-11-20 10:48:40.702131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.702186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.702566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.702597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.702950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.702980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.703311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.703345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.703692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.703723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.703959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.703992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.704387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.704420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.704801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.704833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.705208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.705240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.705620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.705652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 
00:31:08.641 [2024-11-20 10:48:40.705989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.706029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.706332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.706366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.706751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.706783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.707180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.707213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.707606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.707637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.708023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.708055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.708427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.708460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.708842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.708873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.709223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.709255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-20 10:48:40.709614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-20 10:48:40.709646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 
00:31:08.641 [... further failed reconnect attempts with the same sequence from 10:48:40.710008 through 10:48:40.712317 ...]
00:31:08.642 [2024-11-20 10:48:40.712621] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:31:08.642 [2024-11-20 10:48:40.712674] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
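The "Starting SPDK ... / DPDK ... initialization" entry above is the nvmf target process bringing up DPDK's Environment Abstraction Layer; the bracketed EAL parameters are the argument vector SPDK's env layer hands to DPDK. A minimal sketch (not SPDK code; the argv below is copied from the log line above) of how such a vector reaches DPDK:

/* eal_init_sketch.c — minimal sketch of passing the logged EAL parameters
 * to DPDK. SPDK builds an equivalent argv internally via its env_dpdk
 * layer; this standalone form is for illustration only. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program name, as logged */
        "-c", "0xF0",                     /* coremask: cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",            /* hugepage file prefix */
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    /* rte_eal_init() parses the EAL arguments; it returns the number of
     * consumed arguments, or -1 on failure. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    printf("EAL initialized\n");
    return rte_eal_cleanup();
}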
00:31:08.642 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence resumes at 10:48:40.712706 and repeats through 10:48:40.759189; no attempt succeeded ...]
00:31:08.645 [2024-11-20 10:48:40.759485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.759517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.759864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.759895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.760235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.760268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.760630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.760663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.761016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.761047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.761316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.761349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.761730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.761761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.762112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.762144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.762519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.762552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.762926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.762962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 
00:31:08.645 [2024-11-20 10:48:40.763208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.645 [2024-11-20 10:48:40.763241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.645 qpair failed and we were unable to recover it. 00:31:08.645 [2024-11-20 10:48:40.763625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.763655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.764003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.764035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.764499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.764531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.764861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.764891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.765285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.765317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.765689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.765720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.766063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.766094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.766445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.766478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.766710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.766740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 
00:31:08.646 [2024-11-20 10:48:40.767121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.767153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.767531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.767562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.767907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.767938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.768277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.768309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.768670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.768700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.769050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.769081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.769440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.769471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.769834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.769865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.770199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.770231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.770587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.770617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 
00:31:08.646 [2024-11-20 10:48:40.770972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.771004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.771251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.771287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.771555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.771585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.771924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.771954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.772303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.772335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.772712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.772742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.773089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.773127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.773496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.773527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.773855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.773886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.774239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.774271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 
00:31:08.646 [2024-11-20 10:48:40.774621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.774650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.775019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.775048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.775285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.775318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.775684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.775714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.776054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.776085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.776422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.776455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.776816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.776846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-20 10:48:40.777211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-20 10:48:40.777243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.777618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.777651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.777786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.777821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-20 10:48:40.778181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.778214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.778575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.778606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.778962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.778993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.779341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.779373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.779704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.779735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.780084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.780116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.780496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.780529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.780879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.780909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.781282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.781317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.781656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.781686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-20 10:48:40.782042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.782074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.782327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.782361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.782667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.782697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.783036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.783066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.783425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.783458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.783803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.783834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.784191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.784223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.784578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.784609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.784959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.784990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.785415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.785447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-20 10:48:40.785775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.785806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.786150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.786193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.786495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.786527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.786873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.786904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.787266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.787299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.787637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.787668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.787910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.787941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.788305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.788337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.788677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.788708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.789059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.789090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-20 10:48:40.789451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.789484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.789822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.789854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.790252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.790283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.790525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.790555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.790932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-20 10:48:40.790963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-20 10:48:40.791331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.791364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.791732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.791763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.792118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.792148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.792523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.792553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.792903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.792935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 
00:31:08.648 [2024-11-20 10:48:40.793283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.793314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.793662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.793693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.794065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.794097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.794454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.794486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.794834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.794865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.795227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.795260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.795513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.795546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.795884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.795914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.796293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.796324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.796669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.796699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 
00:31:08.648 [2024-11-20 10:48:40.797065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.797094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.797434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.797466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.797820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.797852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.798080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.798110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.798518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.798555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.798774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.798806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.799149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.799192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.799545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.799574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.799840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.799874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.800231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.800264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 
00:31:08.648 [2024-11-20 10:48:40.800614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.800645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.800989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.801019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.801389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.801421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.801767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.801798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.802136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.802177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.802420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.802451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.802812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.802843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.803197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-20 10:48:40.803230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-20 10:48:40.803598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.803632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.804023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.804053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 
00:31:08.649 [2024-11-20 10:48:40.804316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.804348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.804688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.804719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.805076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.805108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.805444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.805476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.805823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.805854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.806211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.806244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.806463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.806493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.806844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.806875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.807241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.807274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-20 10:48:40.807630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-20 10:48:40.807661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 
00:31:08.649 [2024-11-20 10:48:40.808029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.649 [2024-11-20 10:48:40.808059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.649 qpair failed and we were unable to recover it.
[... two more identical failure sequences at 10:48:40.808413 and 10:48:40.808801 ...]
00:31:08.649 [2024-11-20 10:48:40.809064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... failure sequence resumes, timestamps 10:48:40.809197 through 10:48:40.811259 ...]
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock failure triple (errno = 111, tqpair=0x17890c0, 10.0.0.2:4420) repeats for each reconnection attempt from 10:48:40.808 through 10:48:40.855 ...]
00:31:08.652 [2024-11-20 10:48:40.855403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:08.652 [2024-11-20 10:48:40.855442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:08.652 [2024-11-20 10:48:40.855450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:08.652 [2024-11-20 10:48:40.855457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:08.652 [2024-11-20 10:48:40.855463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:08.653 [2024-11-20 10:48:40.857302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:08.653 [2024-11-20 10:48:40.857437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:08.653 [2024-11-20 10:48:40.857593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:31:08.653 [2024-11-20 10:48:40.857596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
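The reactor notices show one event loop brought up per core (cores 4-7, matching the four available cores reported at startup); SPDK keeps each reactor thread on its own CPU. A minimal affinity sketch of that one-thread-per-core pattern is below, assuming a Linux/glibc host that actually has a core 4 (the core number is taken from the log). This is generic pthread code, not SPDK's reactor implementation; build with -pthread.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(4, &set);   /* core 4, matching "Reactor started on core 4" */

        /* Pin the calling thread to that one core, the way an
         * event-loop-per-core framework keeps each reactor in place. */
        int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0) {
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
            return 1;
        }
        printf("pinned to core %d\n", sched_getcpu());
        return 0;
    }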
00:31:08.653 [2024-11-20 10:48:40.860782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.860813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.861184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.861217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.861572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.861603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.861956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.861986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.862337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.862369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.862720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.862751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.863017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.863048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.863398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.863430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.863793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.863823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.864141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.864183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 
00:31:08.653 [2024-11-20 10:48:40.864508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.864540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.864880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.864910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.865124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.865154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.865401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.865437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.865777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.865807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.866157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.866205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.866561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.866593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.866963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.866992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.867336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.867369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.867702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.867735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 
00:31:08.653 [2024-11-20 10:48:40.868104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.868135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.868479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.868512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.653 qpair failed and we were unable to recover it. 00:31:08.653 [2024-11-20 10:48:40.868853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.653 [2024-11-20 10:48:40.868886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.869235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.869268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.869653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.869684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.870045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.870076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.870411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.870442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.870790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.870821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.871105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.871137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 00:31:08.654 [2024-11-20 10:48:40.871531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.654 [2024-11-20 10:48:40.871563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.654 qpair failed and we were unable to recover it. 
00:31:08.657 [2024-11-20 10:48:40.914309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.914348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.914704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.914735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.915093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.915123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.915507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.915539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.915630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.915657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.915899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177ee00 is same with the state(6) to be set
00:31:08.657 [2024-11-20 10:48:40.916666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.916767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.917089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.917129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.917594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.917691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.917984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.918024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
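One line above breaks the otherwise uniform pattern: nvme_tcp_qpair_set_recv_state reports that a qpair (a different object, tqpair=0x177ee00) was asked to enter the receive state it already holds, which the setter logs and ignores rather than treating as a transition. Immediately afterwards the connect attempts continue on a freshly created qpair (the pointer changes to 0x7f2388000b90). The guard below is an assumed shape for illustration, not the SPDK source; the value 6 is simply whatever the driver's receive-state enum maps it to.

```c
/* Illustrative guard (assumed shape, not the SPDK source): setting a
 * receive state the qpair already holds is logged and skipped. */
#include <stdio.h>

struct tqpair {
    int recv_state; /* the real driver uses an enum; a plain int suffices here */
};

static void set_recv_state(struct tqpair *q, int state)
{
    if (q->recv_state == state) {
        /* Mirrors the message in the log above. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, state);
        return;
    }
    q->recv_state = state; /* normal transition */
}

int main(void)
{
    struct tqpair q = { .recv_state = 6 };
    set_recv_state(&q, 6); /* duplicate set: triggers the error line */
    return 0;
}
```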
00:31:08.657 [2024-11-20 10:48:40.918486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.918586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.918965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.919004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.919369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.919402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.919616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.919647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.920020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.920057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.920306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.920337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.920695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.920725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.921083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.921113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.921355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.921386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.657 [2024-11-20 10:48:40.921730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.657 [2024-11-20 10:48:40.921759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.657 qpair failed and we were unable to recover it.
00:31:08.659 [2024-11-20 10:48:40.943451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.943481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.943826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.943857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.944105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.944139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.944370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.944401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.944741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.944771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.945130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.945172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.945551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.945581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.945927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.945961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.946297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.946330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.946550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.946580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 
00:31:08.659 [2024-11-20 10:48:40.946948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.946977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.947193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.947224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.947535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.947565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.947796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.947825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-20 10:48:40.948039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-20 10:48:40.948070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.948424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.948455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.948800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.948830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.949195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.949227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.949589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.949619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.949715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.949744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 
00:31:08.660 [2024-11-20 10:48:40.950078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.950107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.950468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.950499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.950859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.950888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.951101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.951131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.951489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.951520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.951848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.951879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.952227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.952260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.952627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.952657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.952998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.953028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.953267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.953305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 
00:31:08.660 [2024-11-20 10:48:40.953668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.953698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.954046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.954076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.954466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.954498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.954719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.954749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.955105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.955135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.955513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.955544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.955896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.955927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.956298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.956330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.956691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.956722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.957061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.957092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 
00:31:08.660 [2024-11-20 10:48:40.957458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.957490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.957843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.957872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.958228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.958259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.958608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.958638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.958977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.959007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.959359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.959391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.959746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.959776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.959864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.959892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.660 [2024-11-20 10:48:40.960234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.660 [2024-11-20 10:48:40.960266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.660 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.960634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.960664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 
00:31:08.661 [2024-11-20 10:48:40.960975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.961004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.961397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.961428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.961774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.961804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.962134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.962174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.962514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.962543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.962931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.962961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.963351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.963387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.963620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.963651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.964045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.964076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.964285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.964318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 
00:31:08.661 [2024-11-20 10:48:40.964673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.964702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.965074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.965105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.965503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.965535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.965785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.965815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.966155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.966197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.966556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.966586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.966951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.966980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.967211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.967242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.967610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.967640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.967990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.968019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 
00:31:08.661 [2024-11-20 10:48:40.968377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.968408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.968628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.968659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.968996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.969024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.969395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.969427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.969775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.969806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.970189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.970221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.970573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.970603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.970971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.971001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.971345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.971377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.971744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.971775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 
00:31:08.661 [2024-11-20 10:48:40.972118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.972147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.972479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.972509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.972866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.972897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.973185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.973223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.973576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.973607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.973973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.974003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.661 [2024-11-20 10:48:40.974388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.661 [2024-11-20 10:48:40.974418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.661 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.974659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.974688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.975028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.975059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.975421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.975451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 
00:31:08.662 [2024-11-20 10:48:40.975824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.975855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.976232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.976262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.976471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.976499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.976841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.976870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.977271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.977301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.977669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.977699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.978049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.978079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.978323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.978355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.978728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.978758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.978905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.978936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 
00:31:08.662 [2024-11-20 10:48:40.979195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.979227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.979601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.979631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.979995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.980025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.980246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.980278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.980615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.980645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.981014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.981044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.981444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.981475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.981706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.981735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.982059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.982089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.982453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.982485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 
00:31:08.662 [2024-11-20 10:48:40.982837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.982872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.983236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.983267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.983626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.983655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.984027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.984057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.984399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.984430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.984786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.984817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.985176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.985208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.985580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.985610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.985827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.985858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.986197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.986229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 
00:31:08.662 [2024-11-20 10:48:40.986603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.662 [2024-11-20 10:48:40.986634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.662 qpair failed and we were unable to recover it. 00:31:08.662 [2024-11-20 10:48:40.986986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.987015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.987319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.987350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.987706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.987736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.988095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.988124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.988495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.988527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.988764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.988796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.989005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.989034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.989411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.989442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 00:31:08.663 [2024-11-20 10:48:40.989757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.663 [2024-11-20 10:48:40.989788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.663 qpair failed and we were unable to recover it. 
00:31:08.663 [2024-11-20 10:48:40.990132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.663 [2024-11-20 10:48:40.990169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.663 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for roughly 200 further connection attempts between 10:48:40.990 and 10:48:41.068, identical apart from the timestamps: connect() to 10.0.0.2 port 4420 fails with errno = 111 and tqpair 0x17890c0 cannot be recovered ...]
00:31:08.946 [2024-11-20 10:48:41.068338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.946 [2024-11-20 10:48:41.068368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.946 qpair failed and we were unable to recover it.
00:31:08.946 [2024-11-20 10:48:41.068692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.068722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.069069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.069440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.069470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.069702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.069733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.070094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.070123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.070464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.070494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.070831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.070861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.070956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.070987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.071344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.071374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.946 [2024-11-20 10:48:41.071711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.071741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 
00:31:08.946 [2024-11-20 10:48:41.072106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.946 [2024-11-20 10:48:41.072135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.946 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.072507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.072538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.072874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.072903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.073290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.073322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.073686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.073717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.074077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.074106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.074492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.074522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.074789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.074819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.075165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.075194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.075521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.075551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 
00:31:08.947 [2024-11-20 10:48:41.075913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.075943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.076292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.076323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.076614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.076643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.076995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.077024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.077330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.077361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.077704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.077733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.078073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.078101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.078457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.078490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.078852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.078883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.079222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.079252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 
00:31:08.947 [2024-11-20 10:48:41.079592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.079622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.079978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.080007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.080332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.080364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.080725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.080755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.081129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.081168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.081524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.081553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.081923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.081953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.082310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.082340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.082711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.082744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.083077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.083107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 
00:31:08.947 [2024-11-20 10:48:41.083318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.083349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.947 [2024-11-20 10:48:41.083576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.947 [2024-11-20 10:48:41.083606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.947 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.083955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.083985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.084267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.084299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.084520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.084549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.084905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.084934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.085302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.085334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.085731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.085760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.086095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.086124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.086517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.086548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 
00:31:08.948 [2024-11-20 10:48:41.086903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.086932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.087285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.087316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.087675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.087705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.087919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.087948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.088298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.088329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.088679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.088714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.089098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.089127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.089348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.089380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.089737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.090083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.090113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 
00:31:08.948 [2024-11-20 10:48:41.090486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.090518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.090714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.948 [2024-11-20 10:48:41.090743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.948 qpair failed and we were unable to recover it. 00:31:08.948 [2024-11-20 10:48:41.090998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.091028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.091365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.091396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.091753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.091782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.092138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.092174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.092601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.092631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.092982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.093011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.093407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.093438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.093781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.093811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 
00:31:08.949 [2024-11-20 10:48:41.094145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.094185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.094471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.094500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.094774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.094804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.095010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.095038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.095279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.095310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.095642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.095672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.095882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.095910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.096277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.096308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.096656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.096685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.097036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.097066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 
00:31:08.949 [2024-11-20 10:48:41.097411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.097441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.097670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-20 10:48:41.097699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-20 10:48:41.098061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.098097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.098463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.098493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.098721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.098755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.099111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.099141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.099467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.099497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.099858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.099886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.100239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.100271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.100610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.100639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 
00:31:08.950 [2024-11-20 10:48:41.100989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.101018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.101256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.101289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.101645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.101674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.102027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.102056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.102272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.102302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.102529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.102559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.102910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.102941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.103316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.103345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.103630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.103659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.104025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.104054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 
00:31:08.950 [2024-11-20 10:48:41.104270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.104303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.104691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.104721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.105054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.105083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.105466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.105496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.105839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.105869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.106065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-20 10:48:41.106094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-20 10:48:41.106453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.106483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.106722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.106755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.107168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.107198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.107393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.107422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 
00:31:08.951 [2024-11-20 10:48:41.107764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.107794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.108137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.108175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.108514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.108543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.108918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.108948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.109184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.109219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.109566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.109596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.109794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.109822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.110150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.110190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.110405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.110434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.110774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.110803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 
00:31:08.951 [2024-11-20 10:48:41.111155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.111195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.111437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.111466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.111817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.111846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.112047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.112077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.112420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.112451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.112802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.112832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.113147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.113186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.113536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.113566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.113913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.113942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 00:31:08.951 [2024-11-20 10:48:41.114290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.951 [2024-11-20 10:48:41.114320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.951 qpair failed and we were unable to recover it. 
00:31:08.951 [2024-11-20 10:48:41.114552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.951 [2024-11-20 10:48:41.114582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.951 qpair failed and we were unable to recover it.
[... duplicate log entries omitted: the identical three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously, with only the timestamps advancing, between the first occurrence above and the last occurrence below ...]
00:31:08.959 [2024-11-20 10:48:41.188175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.959 [2024-11-20 10:48:41.188205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:08.959 qpair failed and we were unable to recover it.
00:31:08.959 [2024-11-20 10:48:41.188523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.188553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.188915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.188943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.189177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.189208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.189429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.189459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.189803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.189832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.190190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.190222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.190450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.190480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.190822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.190851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.191108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.191136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.191507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.191538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 
00:31:08.959 [2024-11-20 10:48:41.191885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.191914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.192312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.192343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.192697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.192727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.193101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.193130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.193468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.193498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.193702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.193732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.194034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.194063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.194305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.194335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.194691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.194720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-20 10:48:41.195077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-20 10:48:41.195107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 
00:31:08.959 [2024-11-20 10:48:41.195452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.195482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.195643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.195673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.196021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.196050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.196421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.196452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.196802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.196831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.197056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.197086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.197431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.197461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.197819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.197854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.198223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.198254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.198460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.198488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 
00:31:08.960 [2024-11-20 10:48:41.198728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.198757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.198994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.199024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.199236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.199266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.199592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.199621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.199995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.200024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.200226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.200256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.200576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.200605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.200942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.200972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.201232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.201263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.201625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.201655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 
00:31:08.960 [2024-11-20 10:48:41.201999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.202028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.202365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.202396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.202733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.202761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.203101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.203131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.203470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.203501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.203840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.203869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.204120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.204149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.204523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.204553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.204893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.204924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.205275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.205307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 
00:31:08.960 [2024-11-20 10:48:41.205654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.205683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.206047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.206077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.206443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.206473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.206834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.206863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.207225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.207260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.207573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.207604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.207883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.207912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.208105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.208134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.208496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.208525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-20 10:48:41.208902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.208931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 
00:31:08.960 [2024-11-20 10:48:41.209268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-20 10:48:41.209298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.209645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.209675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.209901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.209930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.210210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.210240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.210554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.210583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.210784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.210813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.211167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.211196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.211545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.211574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.211928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.211958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.212306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.212337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-20 10:48:41.212725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.212756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.213099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.213127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.213512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.213542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.213911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.213941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.214301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.214332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.214693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.214722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.215060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.215089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.215410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.215440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.215659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.215688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.215906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.215934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-20 10:48:41.216102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.216131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.216362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.216397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.216627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.216656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.216992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.217021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.217375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.217406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.217778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.217810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.218148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.218188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.218474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.218503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.218737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.218769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.219096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.219125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-20 10:48:41.219511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.219542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.219889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.219920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.220169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.220199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.220569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.220598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.220792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.220822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.221193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.221224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.221600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.221629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.221977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.222007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.222353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.222382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-20 10:48:41.222731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.222760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-20 10:48:41.222993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-20 10:48:41.223022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.223229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.223260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.223468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.223498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.223866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.223895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.224247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.224276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.224625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.224654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.224976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.225005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.225226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.225256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.225603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.225632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.225984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.226014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 
00:31:08.962 [2024-11-20 10:48:41.226242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.226274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.226498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.226527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.226887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.226917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.227301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.227331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.227421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.227450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.227791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.227819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.228067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.228100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.228323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.228353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.228560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.228589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.228917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.228945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 
00:31:08.962 [2024-11-20 10:48:41.229322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.229353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.229705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.229735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.230084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.230113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.230477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.230507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.230861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.230891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.231253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.231284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.231636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.231665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.231876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.231906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.232267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.232297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 00:31:08.962 [2024-11-20 10:48:41.232643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.962 [2024-11-20 10:48:41.232672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:08.962 qpair failed and we were unable to recover it. 
00:31:08.962 [2024-11-20 10:48:41.233027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:08.962 [2024-11-20 10:48:41.233056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 
00:31:08.962 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with only the timestamps differing, from 10:48:41.233027 through 10:48:41.304969 ...]
00:31:09.245 [2024-11-20 10:48:41.304940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:09.245 [2024-11-20 10:48:41.304969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 
00:31:09.245 qpair failed and we were unable to recover it. 
00:31:09.245 [2024-11-20 10:48:41.305329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.305361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.305710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.305739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.305967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.306001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.306263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.306294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.306600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.306629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.306844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.306873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.307229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.307259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.307507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.245 [2024-11-20 10:48:41.307535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.245 qpair failed and we were unable to recover it. 00:31:09.245 [2024-11-20 10:48:41.307887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.307916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.308250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.308280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 
00:31:09.246 [2024-11-20 10:48:41.308628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.308657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.308746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.308775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.309022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.309051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.309416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.309447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.309778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.309807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.310188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.310220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.310564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.310593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.310958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.310987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.311338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.311369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.311579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.311607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 
00:31:09.246 [2024-11-20 10:48:41.311950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.311979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.312328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.312359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.312590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.312619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.312841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.312870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.313226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.313256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.313552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.313581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.313925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.313966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.314187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.314219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.314559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.314589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.314884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.314914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 
00:31:09.246 [2024-11-20 10:48:41.315274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.315305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.315668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.315696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.316080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.316109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.316455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.316485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.316710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.316741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.316964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.316993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.246 [2024-11-20 10:48:41.317228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.246 [2024-11-20 10:48:41.317258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.246 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.317381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.317409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.317619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.317648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.317899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.317928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 
00:31:09.247 [2024-11-20 10:48:41.318262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.318292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.318616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.318645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.318983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.319013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.319250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.319281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.319621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.319650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.319990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.320020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.320376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.320407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.320757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.320785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.321151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.321189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.321636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.321666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 
00:31:09.247 [2024-11-20 10:48:41.321873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.321902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.322249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.322280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.322628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.322656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.323004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.323039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.323394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.323425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.323782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.323811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.324036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.324064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.324216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.324252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.324480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.324509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.324855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.324883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 
00:31:09.247 [2024-11-20 10:48:41.325101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.325131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.325512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.325541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.325866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.325895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.326095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.326123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.326490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.326520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.326872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.326901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.327234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.327265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.327602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.327632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.327863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.327896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.328139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.328178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 
00:31:09.247 [2024-11-20 10:48:41.328395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.328424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.328753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.328782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.329157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.329194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.329540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.329569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.247 [2024-11-20 10:48:41.329911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.247 [2024-11-20 10:48:41.329940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.247 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.330318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.330349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.330720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.330750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.331094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.331122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.331508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.331539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.331895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.331925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 
00:31:09.248 [2024-11-20 10:48:41.332276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.332307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.332669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.332699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.333073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.333102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.333233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.333263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.333580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.333609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.333947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.333975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.334316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.334346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.334711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.334741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.335112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.335141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.335365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.335396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 
00:31:09.248 [2024-11-20 10:48:41.335719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.335749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.336106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.336134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.336508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.336538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.336889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.336919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.337275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.337307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.337621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.337650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.338032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.338267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.338297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.338613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.338644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.338961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.338991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 
00:31:09.248 [2024-11-20 10:48:41.339391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.339423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.339754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.339782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.340118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.340148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.248 [2024-11-20 10:48:41.340499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.248 [2024-11-20 10:48:41.340530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.248 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.340895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.340924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.341278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.341309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.341542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.341572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.341883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.341911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.342226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.342257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.342669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.342699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 
00:31:09.249 [2024-11-20 10:48:41.343043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.343073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.343409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.343440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.343785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.343814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.344028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.344061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.344403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.344432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.344787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.344816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.345172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.345202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.345532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.345561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.345779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.345808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.346143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.346194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 
00:31:09.249 [2024-11-20 10:48:41.346575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.346604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.346949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.346984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.347318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.347349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.347543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.347573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.347950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.347979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.348320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.348350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.348700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.348729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.349077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.349106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.349340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.349372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 00:31:09.249 [2024-11-20 10:48:41.349584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.249 [2024-11-20 10:48:41.349612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.249 qpair failed and we were unable to recover it. 
00:31:09.249 [2024-11-20 10:48:41.349785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.249 [2024-11-20 10:48:41.349817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:09.249 qpair failed and we were unable to recover it.
00:31:09.251 [... the same connect()/qpair-failure pair repeats for tqpair=0x17890c0 from 10:48:41.350119 through 10:48:41.364072; every attempt returned errno = 111 and no qpair recovered ...]
00:31:09.251 [2024-11-20 10:48:41.364598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.251 [2024-11-20 10:48:41.364690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2388000b90 with addr=10.0.0.2, port=4420
00:31:09.251 qpair failed and we were unable to recover it.
00:31:09.253 [... the same failure pair repeats for tqpair=0x7f2388000b90 from 10:48:41.365122 through 10:48:41.396255 ...]
00:31:09.253 [2024-11-20 10:48:41.396788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.254 [2024-11-20 10:48:41.396894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420
00:31:09.254 qpair failed and we were unable to recover it.
00:31:09.255 [... the same failure pair repeats for tqpair=0x7f2384000b90 from 10:48:41.397483 through 10:48:41.420322 ...]
00:31:09.255 [2024-11-20 10:48:41.420690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.255 [2024-11-20 10:48:41.420718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.255 qpair failed and we were unable to recover it. 00:31:09.255 [2024-11-20 10:48:41.420942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.255 [2024-11-20 10:48:41.420971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.255 qpair failed and we were unable to recover it. 00:31:09.255 [2024-11-20 10:48:41.421322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.255 [2024-11-20 10:48:41.421352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.255 qpair failed and we were unable to recover it. 00:31:09.255 [2024-11-20 10:48:41.421693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.255 [2024-11-20 10:48:41.421722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.255 qpair failed and we were unable to recover it. 00:31:09.255 [2024-11-20 10:48:41.421986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.422014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.422364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.422395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.422741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.422771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.422997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.423027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.423349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.423380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.423726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.423756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 
00:31:09.256 [2024-11-20 10:48:41.424146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.424186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.424541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.424571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.424902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.424930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.425016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.425044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.425383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.425412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.425761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.425791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.426027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.426056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.426300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.426330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.426678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.426708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.427053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.427083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 
00:31:09.256 [2024-11-20 10:48:41.427381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.427413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.427768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.427796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.428119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.428154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.428419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.428449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.428759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.428788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.429130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.429168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.429550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.429579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.429935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.429964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.430320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.430349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.430704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.430732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 
00:31:09.256 [2024-11-20 10:48:41.431073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.431101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.431343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.431375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.431728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.431757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.432130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.432169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.432489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.432518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.432855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.432885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.433104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.433133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.433544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.433574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.433922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.433951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.434344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.434375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 
00:31:09.256 [2024-11-20 10:48:41.434708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.434738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.256 [2024-11-20 10:48:41.435096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.256 [2024-11-20 10:48:41.435125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.256 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.435224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.435254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.435589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.435618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.435850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.435879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.436219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.436250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.436498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.436528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.436902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.436931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.437298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.437328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.437571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.437602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 
00:31:09.257 [2024-11-20 10:48:41.437797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.437826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.438183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.438214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.438563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.438593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.438953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.438982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.439186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.439217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.439576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.439606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.439829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.439858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.440199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.440230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.440542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.440571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.440941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.440970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 
00:31:09.257 [2024-11-20 10:48:41.441317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.441347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.441540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.441569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.441920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.441955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.442280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.442311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.442669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.442699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.443039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.443068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.443298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.443329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.443670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.443700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.443910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.443939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.444296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.444326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 
00:31:09.257 [2024-11-20 10:48:41.444667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.444695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.445054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.445083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.445304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.445339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.445684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.445713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.446079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.446108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.446471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.446502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.446859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.446889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.447083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.447112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.447492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.447524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 00:31:09.257 [2024-11-20 10:48:41.447873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.257 [2024-11-20 10:48:41.447903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.257 qpair failed and we were unable to recover it. 
00:31:09.258 [2024-11-20 10:48:41.448236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.448267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.448646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.448676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.448918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.448948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.449287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.449318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.449664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.449693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.450014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.450043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.450346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.450376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.450722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.450752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.451114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.451142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.451496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.451528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 
00:31:09.258 [2024-11-20 10:48:41.451891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.451920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.452281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.452313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.452736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.452765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.453099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.453128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.453372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.453403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.453752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.453782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.454115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.454144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.454485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.454514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.454866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.454895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.455270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.455301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 
00:31:09.258 [2024-11-20 10:48:41.455643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.455671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.456028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.456056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.456411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.456458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.456795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.456825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.457069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.457098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.457266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.457297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.457665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.457694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.458039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.458068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.458440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.458470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.458698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.458727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 
00:31:09.258 [2024-11-20 10:48:41.459050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.459079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.459453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.459484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.459827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.459858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.460197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.460228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.460579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.460608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.460958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.460988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.258 [2024-11-20 10:48:41.461344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.258 [2024-11-20 10:48:41.461373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.258 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.461583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.461612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.461807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.461837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.462180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.462209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 
00:31:09.259 [2024-11-20 10:48:41.462583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.462612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.462959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.462988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.463329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.463359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.463703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.463733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.464089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.464118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.464497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.464527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.464736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.464765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.464999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.465028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.465375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.465406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 00:31:09.259 [2024-11-20 10:48:41.465762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.465793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it. 
00:31:09.259 [2024-11-20 10:48:41.466140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.259 [2024-11-20 10:48:41.466175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420 00:31:09.259 qpair failed and we were unable to recover it.
00:31:09.259 [... the same three messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2384000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat with fresh timestamps for each reconnect attempt through 10:48:41.519 ...]
00:31:09.263 [... identical connect() failures to 10.0.0.2, port=4420 (errno = 111) continue, interleaved with the test harness trace below ...]
00:31:09.263 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:09.263 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:31:09.263 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:09.264 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:09.264 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[the tqpair value changes from 0x7f2384000b90 to 0x17890c0 here: the host is retrying on a freshly allocated qpair object]
00:31:09.265 [2024-11-20 10:48:41.543736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.265 [2024-11-20 10:48:41.543829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:09.265 qpair failed and we were unable to recover it.
[... repeated connect() retry records for tqpair=0x17890c0 (errno = 111, addr=10.0.0.2, port=4420) elided ...]
00:31:09.267 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:09.267 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:09.267 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.267 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() retry records for tqpair=0x17890c0 (errno = 111, addr=10.0.0.2, port=4420) continue throughout, each ending "qpair failed and we were unable to recover it." ...]
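The four traced commands above are the control flow buried in the retry noise: the trap installs the suite's teardown (process_shm to dump the target's shared memory, nvmftestfini to stop the target) on SIGINT, SIGTERM, and EXIT, and rpc_cmd then creates the test's backing device, a 64 MiB RAM bdev with 512-byte blocks named Malloc0. A minimal sketch of the same idiom, with cleanup() as a hypothetical stand-in for those helpers:

#!/usr/bin/env bash
# Run teardown on interrupt, termination, or normal exit -- the same
# pattern the traced trap installs. cleanup() stands in for the suite's
# process_shm / nvmftestfini helpers.
cleanup() {
    echo "dumping target shared memory, then stopping the nvmf target"
}
trap 'cleanup' SIGINT SIGTERM EXIT

# rpc_cmd forwards to SPDK's JSON-RPC client; invoked directly this is
# roughly: scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# (64 MiB total size, 512-byte blocks, bdev name Malloc0)

Registering EXIT alongside the signals is what makes the cleanup unconditional: a plain exit at the end of the test fires the same handler, so the target never outlives the script.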
00:31:09.267 [2024-11-20 10:48:41.563672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.563702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.563897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.563925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.564270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.564300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.564521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.564556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.564882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.564910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.565277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.565308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.565674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.565702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.566038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.566066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.566414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.566444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.566791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.566819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 
00:31:09.267 [2024-11-20 10:48:41.567192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.567222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.567527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.567556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.567754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.567783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.568137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.568188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.568497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.568526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.568881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.568910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.569132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.569170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.569507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.569537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.569871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.569900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.570121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.570149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 
00:31:09.267 [2024-11-20 10:48:41.570508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.570537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.570893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.570922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.571297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.571327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.571674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.571704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.572037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.572066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.572417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.572448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.572794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.572823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.573188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.573217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.267 qpair failed and we were unable to recover it. 00:31:09.267 [2024-11-20 10:48:41.573559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.267 [2024-11-20 10:48:41.573588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.573831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.573860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 
00:31:09.268 [2024-11-20 10:48:41.574237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.574266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.574616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.574645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.574994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.575023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.575375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.575405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.575778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.575807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.576144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.576191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.576502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.576531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.576877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.576907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.577273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.577309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.577665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.577694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 
00:31:09.268 [2024-11-20 10:48:41.578050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.578079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.578291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.578320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.578655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.578684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.578780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.578809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.579149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.579190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.579538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.579567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.579901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.579930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.580278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.580308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.580533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.580562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 00:31:09.268 [2024-11-20 10:48:41.580786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.268 [2024-11-20 10:48:41.580815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420 00:31:09.268 qpair failed and we were unable to recover it. 
00:31:09.268 [2024-11-20 10:48:41.581170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.268 [2024-11-20 10:48:41.581199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17890c0 with addr=10.0.0.2, port=4420
00:31:09.268 qpair failed and we were unable to recover it.
00:31:09.269 [... the same three-record connect()/qpair-failure sequence repeats for every reconnect attempt from 10:48:41.581554 through 10:48:41.595053 ...]
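For context: errno 111 on Linux is ECONNREFUSED. The host-side initiator keeps dialing 10.0.0.2:4420 while no target listener is accepting there, so the remote kernel answers each SYN with RST and every connect() attempt is refused. A minimal sketch that reproduces the same errno, assuming nothing is listening on the chosen local port:

```python
import errno
import socket

# Dial a TCP port with no listener behind it; connect() then fails with
# ECONNREFUSED (errno 111 on Linux) -- the same errno posix_sock_create
# reports in the records above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("127.0.0.1", 4420))  # assumption: no local listener on 4420
except OSError as e:
    assert e.errno == errno.ECONNREFUSED
    print(f"connect() failed, errno = {e.errno}")
finally:
    s.close()
```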
00:31:09.269 [... connect()/qpair-failure sequences continue throughout this span (10:48:41.595325 through 10:48:41.608035), interleaved with the shell trace below ...]
00:31:09.269 Malloc0
00:31:09.269 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:09.270 [2024-11-20 10:48:41.603193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:09.535 [... connect()/qpair-failure sequences continue throughout this span (10:48:41.608378 through 10:48:41.620847) ...]
00:31:09.535 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:09.536 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:09.537 [... connect()/qpair-failure sequences continue throughout this span (10:48:41.621181 through 10:48:41.630754) ...]
00:31:09.537 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:09.538 [... connect()/qpair-failure sequences continue throughout this span (10:48:41.631110 through 10:48:41.639856) ...]
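The rpc_cmd calls traced above (plus the discovery listener added just below) are the standard SPDK NVMe-oF target bring-up sequence. A sketch of the same sequence driven directly through SPDK's rpc.py; the ./scripts/rpc.py path and the malloc bdev size/block size are assumptions, since the log only shows the resulting bdev name, Malloc0:

```python
import subprocess

RPC = "./scripts/rpc.py"  # assumed path to SPDK's RPC client

def rpc(*args: str) -> None:
    # Run one SPDK JSON-RPC call against the locally running target app.
    subprocess.run([RPC, *args], check=True)

# Target bring-up as traced in target_disconnect.sh lines 21-26 above.
rpc("bdev_malloc_create", "64", "512", "-b", "Malloc0")  # size/bs assumed
rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
    "-a", "-s", "SPDK00000000000001")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "Malloc0")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420")
rpc("nvmf_subsystem_add_listener", "discovery",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420")
```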
00:31:09.538 [2024-11-20 10:48:41.639988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:09.538 [2024-11-20 10:48:41.643892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.538 [2024-11-20 10:48:41.644021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.538 [2024-11-20 10:48:41.644068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.538 [2024-11-20 10:48:41.644091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.538 [2024-11-20 10:48:41.644114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:09.538 [2024-11-20 10:48:41.644181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.538 qpair failed and we were unable to recover it.
00:31:09.538 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:09.538 [... the same Unknown-controller / Fabric CONNECT failure sequence repeats at 10:48:41.653818, again ending in "qpair failed and we were unable to recover it." ...]
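Note the changed failure signature once the listener is up: TCP now connects, but the Fabrics CONNECT for I/O qpair 3 is rejected in ctrlr.c because the target no longer has a controller with ID 0x1 (it was torn down by the disconnect under test), and the host poller reports sct 1, sc 130. A small decode of that status; the mapping below is taken from the NVMe over Fabrics spec's Connect command-specific status values, not from the log, so treat it as an assumption:

```python
# Decode "sct 1, sc 130" from nvme_fabric_qpair_connect_poll.
# SCT 0x1 = Command Specific Status; for the Fabrics Connect command the
# spec defines these command-specific codes (assumed table):
FABRICS_CONNECT_SC = {
    0x80: "Incompatible Format",
    0x81: "Controller Busy",
    0x82: "Connect Invalid Parameters",
    0x83: "Connect Restart Discovery",
    0x84: "Connect Invalid Host",
}

sct, sc = 1, 130  # values from the log record above
if sct == 1:
    print(f"sct {sct}, sc {sc:#x}: {FABRICS_CONNECT_SC.get(sc, 'unknown')}")
# -> sct 1, sc 0x82: Connect Invalid Parameters (here: unknown controller ID 0x1)
```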
00:31:09.538 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.538 10:48:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2244346
[... the identical six-entry CONNECT failure sequence (ctrlr.c: 762 "Unknown controller ID 0x1" -> nvme_fabric.c rc -5, sct 1, sc 130 -> nvme_tcp.c poll/connect failures for tqpair=0x17890c0 -> nvme_qpair.c CQ transport error -6 on qpair id 3), differing only in its timestamps, repeats 63 more times at roughly 10 ms intervals from 10:48:41.663737 through 10:48:42.285460; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:09.538 [2024-11-20 10:48:41.693840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.538 [2024-11-20 10:48:41.693885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.538 [2024-11-20 10:48:41.693900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.538 [2024-11-20 10:48:41.693908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.538 [2024-11-20 10:48:41.693914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.538 [2024-11-20 10:48:41.693929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.538 qpair failed and we were unable to recover it. 00:31:09.538 [2024-11-20 10:48:41.703862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.538 [2024-11-20 10:48:41.703912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.538 [2024-11-20 10:48:41.703925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.538 [2024-11-20 10:48:41.703933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.538 [2024-11-20 10:48:41.703939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.538 [2024-11-20 10:48:41.703953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.538 qpair failed and we were unable to recover it. 00:31:09.538 [2024-11-20 10:48:41.713925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.538 [2024-11-20 10:48:41.713978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.538 [2024-11-20 10:48:41.713991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.538 [2024-11-20 10:48:41.713999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.538 [2024-11-20 10:48:41.714005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.538 [2024-11-20 10:48:41.714018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.538 qpair failed and we were unable to recover it. 
00:31:09.538 [2024-11-20 10:48:41.723943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.538 [2024-11-20 10:48:41.723996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.538 [2024-11-20 10:48:41.724009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.538 [2024-11-20 10:48:41.724016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.538 [2024-11-20 10:48:41.724026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.538 [2024-11-20 10:48:41.724040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.538 qpair failed and we were unable to recover it. 00:31:09.538 [2024-11-20 10:48:41.733824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.733873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.733887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.733894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.733900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.733914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.743981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.744029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.744042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.744050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.744056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.744070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 
00:31:09.539 [2024-11-20 10:48:41.754034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.754088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.754101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.754108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.754115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.754128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.764034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.764084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.764097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.764105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.764111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.764125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.774038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.774086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.774100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.774108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.774115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.774128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 
00:31:09.539 [2024-11-20 10:48:41.784036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.784082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.784095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.784102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.784109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.784123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.794191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.794282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.794296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.794304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.794311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.794326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.804122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.804178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.804192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.804199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.804206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.804220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 
00:31:09.539 [2024-11-20 10:48:41.814021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.814072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.814088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.814096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.814102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.814116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.824177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.824267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.824281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.824289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.824296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.824310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 00:31:09.539 [2024-11-20 10:48:41.834231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.539 [2024-11-20 10:48:41.834284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.539 [2024-11-20 10:48:41.834296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.539 [2024-11-20 10:48:41.834304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.539 [2024-11-20 10:48:41.834310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.539 [2024-11-20 10:48:41.834324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.539 qpair failed and we were unable to recover it. 
00:31:09.539 [2024-11-20 10:48:41.844244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.844298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.844311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.844318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.844325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.844338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 00:31:09.540 [2024-11-20 10:48:41.854225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.854287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.854300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.854307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.854317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.854331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 00:31:09.540 [2024-11-20 10:48:41.864365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.864422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.864436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.864445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.864452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.864466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 
00:31:09.540 [2024-11-20 10:48:41.874363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.874417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.874431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.874438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.874444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.874459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 00:31:09.540 [2024-11-20 10:48:41.884378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.884432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.884445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.884453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.884460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.884474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 00:31:09.540 [2024-11-20 10:48:41.894401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.894451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.894464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.894471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.894478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.894491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 
00:31:09.540 [2024-11-20 10:48:41.904408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.540 [2024-11-20 10:48:41.904458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.540 [2024-11-20 10:48:41.904471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.540 [2024-11-20 10:48:41.904479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.540 [2024-11-20 10:48:41.904485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.540 [2024-11-20 10:48:41.904499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.540 qpair failed and we were unable to recover it. 00:31:09.803 [2024-11-20 10:48:41.914466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.803 [2024-11-20 10:48:41.914521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.803 [2024-11-20 10:48:41.914534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.803 [2024-11-20 10:48:41.914543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.803 [2024-11-20 10:48:41.914551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.803 [2024-11-20 10:48:41.914564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.803 qpair failed and we were unable to recover it. 00:31:09.803 [2024-11-20 10:48:41.924502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.803 [2024-11-20 10:48:41.924596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.803 [2024-11-20 10:48:41.924609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.803 [2024-11-20 10:48:41.924616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.803 [2024-11-20 10:48:41.924623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.803 [2024-11-20 10:48:41.924637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.803 qpair failed and we were unable to recover it. 
00:31:09.803 [2024-11-20 10:48:41.934487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.803 [2024-11-20 10:48:41.934571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.803 [2024-11-20 10:48:41.934584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.803 [2024-11-20 10:48:41.934591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.803 [2024-11-20 10:48:41.934598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.803 [2024-11-20 10:48:41.934612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.803 qpair failed and we were unable to recover it. 00:31:09.803 [2024-11-20 10:48:41.944501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.803 [2024-11-20 10:48:41.944560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.803 [2024-11-20 10:48:41.944577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.803 [2024-11-20 10:48:41.944585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.803 [2024-11-20 10:48:41.944591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.803 [2024-11-20 10:48:41.944605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.803 qpair failed and we were unable to recover it. 00:31:09.803 [2024-11-20 10:48:41.954596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.803 [2024-11-20 10:48:41.954662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.803 [2024-11-20 10:48:41.954675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.803 [2024-11-20 10:48:41.954683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.803 [2024-11-20 10:48:41.954689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.803 [2024-11-20 10:48:41.954703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.803 qpair failed and we were unable to recover it. 
00:31:09.803 [2024-11-20 10:48:41.964575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.803 [2024-11-20 10:48:41.964626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.803 [2024-11-20 10:48:41.964639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.803 [2024-11-20 10:48:41.964646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.803 [2024-11-20 10:48:41.964653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:41.964666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:41.974597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:41.974687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:41.974701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:41.974708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:41.974715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:41.974729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:41.984494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:41.984544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:41.984557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:41.984564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:41.984573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:41.984587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 
00:31:09.804 [2024-11-20 10:48:41.994707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:41.994759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:41.994773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:41.994780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:41.994787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:41.994801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.004697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.004747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.004760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.004767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.004774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.004787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.014773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.014826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.014839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.014846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.014853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.014867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 
00:31:09.804 [2024-11-20 10:48:42.024730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.024782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.024795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.024802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.024809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.024822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.034783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.034838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.034851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.034859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.034865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.034879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.044720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.044766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.044781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.044788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.044795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.044810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 
00:31:09.804 [2024-11-20 10:48:42.054865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.054912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.054925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.054933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.054939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.054953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.064843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.064893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.064907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.064914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.064921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.064935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.074916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.075002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.075018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.075026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.075033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.075047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 
00:31:09.804 [2024-11-20 10:48:42.084899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.084980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.085005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.085014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.085021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.804 [2024-11-20 10:48:42.085041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.804 qpair failed and we were unable to recover it. 00:31:09.804 [2024-11-20 10:48:42.094923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.804 [2024-11-20 10:48:42.094976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.804 [2024-11-20 10:48:42.095001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.804 [2024-11-20 10:48:42.095010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.804 [2024-11-20 10:48:42.095017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.095036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 00:31:09.805 [2024-11-20 10:48:42.104953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.105033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.105048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.105056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.105063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.105078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 
00:31:09.805 [2024-11-20 10:48:42.115006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.115073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.115086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.115094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.115104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.115120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 00:31:09.805 [2024-11-20 10:48:42.125011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.125067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.125080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.125088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.125094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.125108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 00:31:09.805 [2024-11-20 10:48:42.134991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.135044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.135057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.135065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.135071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.135085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 
00:31:09.805 [2024-11-20 10:48:42.145054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.145109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.145123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.145130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.145137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.145151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 00:31:09.805 [2024-11-20 10:48:42.155108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.155210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.155224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.155231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.155238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.155252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 00:31:09.805 [2024-11-20 10:48:42.165128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.165178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.165192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.165200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.165206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.165220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 
00:31:09.805 [2024-11-20 10:48:42.175128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:09.805 [2024-11-20 10:48:42.175188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:09.805 [2024-11-20 10:48:42.175202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:09.805 [2024-11-20 10:48:42.175209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.805 [2024-11-20 10:48:42.175216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:09.805 [2024-11-20 10:48:42.175231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:09.805 qpair failed and we were unable to recover it. 00:31:10.067 [2024-11-20 10:48:42.185141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.067 [2024-11-20 10:48:42.185191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.067 [2024-11-20 10:48:42.185204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.067 [2024-11-20 10:48:42.185212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.067 [2024-11-20 10:48:42.185219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.068 [2024-11-20 10:48:42.185234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.068 qpair failed and we were unable to recover it. 00:31:10.068 [2024-11-20 10:48:42.195242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.068 [2024-11-20 10:48:42.195294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.068 [2024-11-20 10:48:42.195307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.068 [2024-11-20 10:48:42.195315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.068 [2024-11-20 10:48:42.195322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.068 [2024-11-20 10:48:42.195336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.068 qpair failed and we were unable to recover it. 
00:31:10.068 [2024-11-20 10:48:42.205107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.068 [2024-11-20 10:48:42.205162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.068 [2024-11-20 10:48:42.205184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.068 [2024-11-20 10:48:42.205191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.068 [2024-11-20 10:48:42.205198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.068 [2024-11-20 10:48:42.205214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.068 qpair failed and we were unable to recover it. 00:31:10.068 [2024-11-20 10:48:42.215259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.068 [2024-11-20 10:48:42.215305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.068 [2024-11-20 10:48:42.215319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.068 [2024-11-20 10:48:42.215327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.068 [2024-11-20 10:48:42.215334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.068 [2024-11-20 10:48:42.215348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.068 qpair failed and we were unable to recover it. 00:31:10.068 [2024-11-20 10:48:42.225254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.068 [2024-11-20 10:48:42.225319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.068 [2024-11-20 10:48:42.225332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.068 [2024-11-20 10:48:42.225339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.068 [2024-11-20 10:48:42.225346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.068 [2024-11-20 10:48:42.225360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.068 qpair failed and we were unable to recover it. 
00:31:10.068 [2024-11-20 10:48:42.235324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.235377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.235391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.235398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.235405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.235418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.245331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.245386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.245400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.245407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.245418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.245432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.255292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.255340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.255354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.255361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.255368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.255381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.265385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.265435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.265448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.265456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.265462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.265476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.275436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.275488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.275501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.275509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.275516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.275530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.285460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.285510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.285523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.285531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.285538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.285551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.295446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.295491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.295505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.295512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.295519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.295533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.305535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.305614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.305627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.305634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.305642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.068 [2024-11-20 10:48:42.305655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.068 qpair failed and we were unable to recover it.
00:31:10.068 [2024-11-20 10:48:42.315445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.068 [2024-11-20 10:48:42.315544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.068 [2024-11-20 10:48:42.315557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.068 [2024-11-20 10:48:42.315565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.068 [2024-11-20 10:48:42.315571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.315585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.325572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.325620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.325633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.325640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.325647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.325661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.335585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.335632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.335649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.335657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.335663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.335677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.345600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.345648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.345662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.345669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.345676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.345690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.355554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.355618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.355632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.355639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.355646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.355660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.365644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.365706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.365720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.365727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.365734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.365748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.375682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.375733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.375746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.375754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.375764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.375778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.385712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.385761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.385775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.385782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.385789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.385803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.395777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.395847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.395861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.395868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.395875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.395889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.405771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.405821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.405835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.405842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.405849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.405863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.415788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.415840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.415855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.415862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.415869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.415883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.425809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.425862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.425876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.425883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.425890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.425904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.069 [2024-11-20 10:48:42.435759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.069 [2024-11-20 10:48:42.435812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.069 [2024-11-20 10:48:42.435825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.069 [2024-11-20 10:48:42.435832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.069 [2024-11-20 10:48:42.435839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.069 [2024-11-20 10:48:42.435853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.069 qpair failed and we were unable to recover it.
00:31:10.332 [2024-11-20 10:48:42.445852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.332 [2024-11-20 10:48:42.445906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.332 [2024-11-20 10:48:42.445919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.332 [2024-11-20 10:48:42.445926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.332 [2024-11-20 10:48:42.445933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.332 [2024-11-20 10:48:42.445947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.332 qpair failed and we were unable to recover it.
00:31:10.332 [2024-11-20 10:48:42.455908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.332 [2024-11-20 10:48:42.455969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.332 [2024-11-20 10:48:42.455982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.332 [2024-11-20 10:48:42.455989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.332 [2024-11-20 10:48:42.455996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.332 [2024-11-20 10:48:42.456010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.332 qpair failed and we were unable to recover it.
00:31:10.332 [2024-11-20 10:48:42.465934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.332 [2024-11-20 10:48:42.466014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.332 [2024-11-20 10:48:42.466031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.332 [2024-11-20 10:48:42.466039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.332 [2024-11-20 10:48:42.466046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.332 [2024-11-20 10:48:42.466060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.332 qpair failed and we were unable to recover it.
00:31:10.332 [2024-11-20 10:48:42.476011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.332 [2024-11-20 10:48:42.476070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.332 [2024-11-20 10:48:42.476084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.332 [2024-11-20 10:48:42.476091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.332 [2024-11-20 10:48:42.476098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.332 [2024-11-20 10:48:42.476112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.332 qpair failed and we were unable to recover it.
00:31:10.332 [2024-11-20 10:48:42.485992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.332 [2024-11-20 10:48:42.486045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.486058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.486065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.486072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.486086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.495952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.496028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.496041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.496049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.496056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.496070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.506026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.506072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.506086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.506093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.506103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.506117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.516104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.516194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.516208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.516215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.516222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.516236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.526080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.526132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.526145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.526153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.526163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.526178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.536106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.536156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.536173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.536181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.536187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.536201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.546188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.546243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.546256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.546264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.546270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.546285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.556220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.556289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.556302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.556309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.556316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.556330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.566163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.566263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.566277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.566284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.566290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.566305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.576214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.576264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.576277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.576284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.576291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.576305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.586268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.586319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.586332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.586339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.586345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.586359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.596306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.596360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.596376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.596383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.596390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.596404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.606310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.606361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.333 [2024-11-20 10:48:42.606376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.333 [2024-11-20 10:48:42.606384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.333 [2024-11-20 10:48:42.606391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.333 [2024-11-20 10:48:42.606406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.333 qpair failed and we were unable to recover it.
00:31:10.333 [2024-11-20 10:48:42.616361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.333 [2024-11-20 10:48:42.616435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.616448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.616455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.616462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.616477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.626350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.626395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.626409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.626417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.626423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.626437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.636400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.636454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.636467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.636474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.636485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.636498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.646420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.646475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.646488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.646495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.646501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.646515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.656450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.656501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.656515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.656523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.656529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.656543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.666462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.666510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.666523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.666530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.666537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.666550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.676521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.676577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.676590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.676597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.676604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.676618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.686505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.686557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.686570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.686578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.686584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.686597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.334 [2024-11-20 10:48:42.696518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.334 [2024-11-20 10:48:42.696567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.334 [2024-11-20 10:48:42.696580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.334 [2024-11-20 10:48:42.696587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.334 [2024-11-20 10:48:42.696594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.334 [2024-11-20 10:48:42.696607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.334 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.706532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.706587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.706600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.706607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.706614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.706628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.716638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.716720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.716733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.716740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.716748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.716762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.726669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.726736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.726752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.726760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.726766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.726780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.736618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.736674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.736687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.736695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.736701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.736715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.746537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.746591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.746604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.746612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.746618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.746632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.756734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.756791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.756806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.756813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.756820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.756838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.766721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.766773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.766787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.766795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.766805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.766819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.776739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.776785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.776799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.776806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.597 [2024-11-20 10:48:42.776813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.597 [2024-11-20 10:48:42.776826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.597 qpair failed and we were unable to recover it.
00:31:10.597 [2024-11-20 10:48:42.786773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.597 [2024-11-20 10:48:42.786819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.597 [2024-11-20 10:48:42.786833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.597 [2024-11-20 10:48:42.786840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.598 [2024-11-20 10:48:42.786847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.598 [2024-11-20 10:48:42.786861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.598 qpair failed and we were unable to recover it.
00:31:10.598 [2024-11-20 10:48:42.796858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.598 [2024-11-20 10:48:42.796909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.598 [2024-11-20 10:48:42.796923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.598 [2024-11-20 10:48:42.796930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.598 [2024-11-20 10:48:42.796937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:10.598 [2024-11-20 10:48:42.796951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.598 qpair failed and we were unable to recover it.
00:31:10.598 [2024-11-20 10:48:42.806820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.806879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.806904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.806913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.806921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.806941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.816771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.816833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.816848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.816856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.816862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.816878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.826882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.826972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.826989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.826997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.827004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.827018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 
00:31:10.598 [2024-11-20 10:48:42.836939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.837005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.837030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.837039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.837047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.837066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.846835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.846896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.846920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.846929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.846937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.846958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.856858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.856919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.856947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.856956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.856965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.856984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 
00:31:10.598 [2024-11-20 10:48:42.866956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.867004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.867019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.867027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.867033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.867048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.876919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.876971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.876985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.876993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.876999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.877013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.887047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.887099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.887112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.887119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.887126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.887140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 
00:31:10.598 [2024-11-20 10:48:42.897069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.897116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.897129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.897136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.897147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.897166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.907131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.907212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.598 [2024-11-20 10:48:42.907226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.598 [2024-11-20 10:48:42.907233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.598 [2024-11-20 10:48:42.907240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.598 [2024-11-20 10:48:42.907255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.598 qpair failed and we were unable to recover it. 00:31:10.598 [2024-11-20 10:48:42.917181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.598 [2024-11-20 10:48:42.917271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.599 [2024-11-20 10:48:42.917285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.599 [2024-11-20 10:48:42.917292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.599 [2024-11-20 10:48:42.917300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.599 [2024-11-20 10:48:42.917314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.599 qpair failed and we were unable to recover it. 
00:31:10.599 [2024-11-20 10:48:42.927170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.599 [2024-11-20 10:48:42.927222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.599 [2024-11-20 10:48:42.927236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.599 [2024-11-20 10:48:42.927243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.599 [2024-11-20 10:48:42.927250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.599 [2024-11-20 10:48:42.927264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.599 qpair failed and we were unable to recover it. 00:31:10.599 [2024-11-20 10:48:42.937220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.599 [2024-11-20 10:48:42.937289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.599 [2024-11-20 10:48:42.937303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.599 [2024-11-20 10:48:42.937310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.599 [2024-11-20 10:48:42.937317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.599 [2024-11-20 10:48:42.937332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.599 qpair failed and we were unable to recover it. 00:31:10.599 [2024-11-20 10:48:42.947189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.599 [2024-11-20 10:48:42.947239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.599 [2024-11-20 10:48:42.947253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.599 [2024-11-20 10:48:42.947260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.599 [2024-11-20 10:48:42.947267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.599 [2024-11-20 10:48:42.947281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.599 qpair failed and we were unable to recover it. 
00:31:10.599 [2024-11-20 10:48:42.957270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.599 [2024-11-20 10:48:42.957325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.599 [2024-11-20 10:48:42.957338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.599 [2024-11-20 10:48:42.957346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.599 [2024-11-20 10:48:42.957352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.599 [2024-11-20 10:48:42.957367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.599 qpair failed and we were unable to recover it. 00:31:10.599 [2024-11-20 10:48:42.967261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.599 [2024-11-20 10:48:42.967315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.599 [2024-11-20 10:48:42.967328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.599 [2024-11-20 10:48:42.967335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.599 [2024-11-20 10:48:42.967342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.599 [2024-11-20 10:48:42.967356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.599 qpair failed and we were unable to recover it. 00:31:10.861 [2024-11-20 10:48:42.977309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.861 [2024-11-20 10:48:42.977379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.861 [2024-11-20 10:48:42.977392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.861 [2024-11-20 10:48:42.977400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.861 [2024-11-20 10:48:42.977406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.861 [2024-11-20 10:48:42.977420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.861 qpair failed and we were unable to recover it. 
00:31:10.861 [2024-11-20 10:48:42.987300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.861 [2024-11-20 10:48:42.987348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.861 [2024-11-20 10:48:42.987365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.861 [2024-11-20 10:48:42.987372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.861 [2024-11-20 10:48:42.987378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.861 [2024-11-20 10:48:42.987392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.861 qpair failed and we were unable to recover it. 00:31:10.861 [2024-11-20 10:48:42.997388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.861 [2024-11-20 10:48:42.997442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.861 [2024-11-20 10:48:42.997455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.861 [2024-11-20 10:48:42.997462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.861 [2024-11-20 10:48:42.997469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.861 [2024-11-20 10:48:42.997483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.861 qpair failed and we were unable to recover it. 00:31:10.861 [2024-11-20 10:48:43.007343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.861 [2024-11-20 10:48:43.007394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.861 [2024-11-20 10:48:43.007407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.861 [2024-11-20 10:48:43.007415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.007421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.007435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 
00:31:10.862 [2024-11-20 10:48:43.017404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.017478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.017491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.017499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.017506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.017519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.027383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.027430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.027444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.027451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.027462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.027476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.037363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.037416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.037432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.037440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.037446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.037461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 
00:31:10.862 [2024-11-20 10:48:43.047454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.047503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.047517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.047524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.047531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.047545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.057476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.057528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.057541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.057548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.057555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.057569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.067511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.067557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.067570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.067578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.067584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.067598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 
00:31:10.862 [2024-11-20 10:48:43.077580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.077635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.077648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.077655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.077662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.077676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.087534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.087587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.087600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.087608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.087614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.087628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.097584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.097636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.097650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.097657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.097664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.097678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 
00:31:10.862 [2024-11-20 10:48:43.107619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.107668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.107681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.107688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.107695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.107708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.117692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.117747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.117763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.117770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.117777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.117791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 00:31:10.862 [2024-11-20 10:48:43.127678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.127730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.127743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.127750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.127757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.862 [2024-11-20 10:48:43.127770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.862 qpair failed and we were unable to recover it. 
00:31:10.862 [2024-11-20 10:48:43.137616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.862 [2024-11-20 10:48:43.137665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.862 [2024-11-20 10:48:43.137678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.862 [2024-11-20 10:48:43.137685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.862 [2024-11-20 10:48:43.137691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.137705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:10.863 [2024-11-20 10:48:43.147690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.147757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.147770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.147777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.147784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.147798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:10.863 [2024-11-20 10:48:43.157802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.157857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.157873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.157880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.157890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.157908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 
00:31:10.863 [2024-11-20 10:48:43.167797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.167848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.167862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.167869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.167876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.167890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:10.863 [2024-11-20 10:48:43.177812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.177862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.177875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.177882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.177889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.177902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:10.863 [2024-11-20 10:48:43.187828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.187894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.187907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.187915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.187922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.187935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 
00:31:10.863 [2024-11-20 10:48:43.197905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.197957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.197970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.197977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.197984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.197997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:10.863 [2024-11-20 10:48:43.207882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.207932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.207945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.207953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.207959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.207973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:10.863 [2024-11-20 10:48:43.217899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.217945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.217958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.217965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.217972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.217985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 
00:31:10.863 [2024-11-20 10:48:43.227941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.863 [2024-11-20 10:48:43.227991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.863 [2024-11-20 10:48:43.228004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.863 [2024-11-20 10:48:43.228011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.863 [2024-11-20 10:48:43.228017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:10.863 [2024-11-20 10:48:43.228031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.863 qpair failed and we were unable to recover it. 00:31:11.125 [2024-11-20 10:48:43.238016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.125 [2024-11-20 10:48:43.238093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.125 [2024-11-20 10:48:43.238106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.125 [2024-11-20 10:48:43.238114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.125 [2024-11-20 10:48:43.238120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.125 [2024-11-20 10:48:43.238135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.125 qpair failed and we were unable to recover it. 00:31:11.125 [2024-11-20 10:48:43.247995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.125 [2024-11-20 10:48:43.248043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.125 [2024-11-20 10:48:43.248061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.125 [2024-11-20 10:48:43.248068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.125 [2024-11-20 10:48:43.248074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.125 [2024-11-20 10:48:43.248089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.125 qpair failed and we were unable to recover it. 
00:31:11.125 [2024-11-20 10:48:43.258025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.126 [2024-11-20 10:48:43.258075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.126 [2024-11-20 10:48:43.258088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.126 [2024-11-20 10:48:43.258095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.126 [2024-11-20 10:48:43.258102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.126 [2024-11-20 10:48:43.258116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.126 qpair failed and we were unable to recover it. 00:31:11.126 [2024-11-20 10:48:43.268048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.126 [2024-11-20 10:48:43.268094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.126 [2024-11-20 10:48:43.268107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.126 [2024-11-20 10:48:43.268114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.126 [2024-11-20 10:48:43.268121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.126 [2024-11-20 10:48:43.268135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.126 qpair failed and we were unable to recover it. 00:31:11.126 [2024-11-20 10:48:43.278115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.126 [2024-11-20 10:48:43.278181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.126 [2024-11-20 10:48:43.278194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.126 [2024-11-20 10:48:43.278202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.126 [2024-11-20 10:48:43.278208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.126 [2024-11-20 10:48:43.278222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.126 qpair failed and we were unable to recover it. 
00:31:11.126 [2024-11-20 10:48:43.288104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.126 [2024-11-20 10:48:43.288165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.126 [2024-11-20 10:48:43.288178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.126 [2024-11-20 10:48:43.288188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.126 [2024-11-20 10:48:43.288195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.126 [2024-11-20 10:48:43.288209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.126 qpair failed and we were unable to recover it. 00:31:11.126 [2024-11-20 10:48:43.298144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.126 [2024-11-20 10:48:43.298191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.126 [2024-11-20 10:48:43.298204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.126 [2024-11-20 10:48:43.298212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.126 [2024-11-20 10:48:43.298219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.126 [2024-11-20 10:48:43.298233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.126 qpair failed and we were unable to recover it. 00:31:11.126 [2024-11-20 10:48:43.308182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.126 [2024-11-20 10:48:43.308303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.126 [2024-11-20 10:48:43.308316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.126 [2024-11-20 10:48:43.308323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.126 [2024-11-20 10:48:43.308330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.126 [2024-11-20 10:48:43.308344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.126 qpair failed and we were unable to recover it. 
00:31:11.126 [2024-11-20 10:48:43.318206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.126 [2024-11-20 10:48:43.318272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.126 [2024-11-20 10:48:43.318285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.126 [2024-11-20 10:48:43.318293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.126 [2024-11-20 10:48:43.318299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.126 [2024-11-20 10:48:43.318313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.126 qpair failed and we were unable to recover it.
[log condensed: the same seven-record failure signature repeats verbatim, apart from timestamps, for the next 17 connect attempts at roughly 10 ms intervals, from 10:48:43.328 through 10:48:43.488, always against tqpair=0x17890c0 and qpair id 3.]
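For readers unfamiliar with this code path, the records above come from the standard SPDK host-side fabrics flow: the host connects the admin queue, then allocates an I/O qpair, and the fabrics CONNECT for that I/O queue is polled until it completes. The minimal sketch below uses SPDK's public API against the same transport ID string that appears in the log and shows where each error would surface; try_io_qpair() is a hypothetical helper written for illustration, not the autotest code that produced this output.

```c
#include "spdk/nvme.h"

/* Hypothetical helper for illustration only; assumes the SPDK environment
 * has already been initialized with spdk_env_init(). It mirrors the flow
 * that is failing in the log: admin connect, then one I/O qpair whose
 * fabrics CONNECT is polled internally. */
static int
try_io_qpair(void)
{
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;

	/* Same target as in the log records above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return -1;
	}

	/* Admin queue CONNECT; in this log it evidently succeeded earlier,
	 * since the host already holds controller ID 0x1. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return -1;
	}

	/* The I/O queue CONNECT happens inside this call; the poll loop that
	 * logs "Failed to poll NVMe-oF Fabric CONNECT command" runs here. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		spdk_nvme_detach(ctrlr);
		return -1;
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	return 0;
}
```

In this log it is only the I/O qpair step that fails, over and over: the target-side record from ctrlr.c shows it refusing to attach the new I/O queue to controller ID 0x1, which it no longer recognizes.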
[log condensed: under elapsed times 00:31:11.390 and 00:31:11.391, the next 27 connect attempts, from 10:48:43.498 through 10:48:43.759, fail with the identical signature: Unknown controller ID 0x1, Connect command failed rc -5, completion sct 1, sc 130, failed CONNECT poll, failed to connect tqpair=0x17890c0, CQ transport error -6 on qpair id 3, and "qpair failed and we were unable to recover it."]
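The status pair in every record decodes the same way: sct 1 is the command-specific status code type, and sc 130 is 0x82, which for a fabrics CONNECT command means Connect Invalid Parameters; that lines up with the target rejecting the CONNECT because it does not recognize controller ID 0x1. A small illustrative decoder is below; the enum constants come from SPDK's public spec headers, while decode_connect_status() itself is hypothetical.

```c
#include <stdio.h>
#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"

/* Decode the "sct 1, sc 130" pair from the log; 130 == 0x82. */
static const char *
decode_connect_status(int sct, int sc)
{
	if (sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
	    sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM) {
		/* Fabrics CONNECT rejected for invalid parameters, e.g. a
		 * cntlid the target no longer recognizes, which is exactly
		 * the "Unknown controller ID" complaint in this log. */
		return "CONNECT Invalid Parameters (sct 0x1, sc 0x82)";
	}
	return "other status";
}

int
main(void)
{
	printf("%s\n", decode_connect_status(1, 130));
	return 0;
}
```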
00:31:11.654 [2024-11-20 10:48:43.769478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.769555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.769568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.769575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.769581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.769595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.779443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.779491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.779504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.779512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.779518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.779532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.789342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.789388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.789402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.789410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.789416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.789431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 
00:31:11.654 [2024-11-20 10:48:43.799517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.799570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.799584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.799591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.799598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.799611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.809505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.809557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.809570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.809581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.809587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.809601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.819518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.819573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.819586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.819593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.819600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.819614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 
00:31:11.654 [2024-11-20 10:48:43.829566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.829612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.829625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.829632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.829638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.829652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.839647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.839700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.839713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.839720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.839727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.839740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.849591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.849676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.849689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.849697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.849704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.849718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 
00:31:11.654 [2024-11-20 10:48:43.859552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.859637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.859651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.859658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.859665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.859679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.869761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.869815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.654 [2024-11-20 10:48:43.869829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.654 [2024-11-20 10:48:43.869837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.654 [2024-11-20 10:48:43.869844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.654 [2024-11-20 10:48:43.869858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.654 qpair failed and we were unable to recover it. 00:31:11.654 [2024-11-20 10:48:43.879777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.654 [2024-11-20 10:48:43.879830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.879843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.879850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.879857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.879871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 
00:31:11.655 [2024-11-20 10:48:43.889768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.655 [2024-11-20 10:48:43.889867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.889882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.889889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.889896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.889910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 00:31:11.655 [2024-11-20 10:48:43.899785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.655 [2024-11-20 10:48:43.899835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.899848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.899855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.899862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.899876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 00:31:11.655 [2024-11-20 10:48:43.909806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.655 [2024-11-20 10:48:43.909901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.909914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.909922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.909929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.909942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 
00:31:11.655 [2024-11-20 10:48:43.919858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.655 [2024-11-20 10:48:43.919913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.919926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.919933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.919940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.919954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 00:31:11.655 [2024-11-20 10:48:43.929829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.655 [2024-11-20 10:48:43.929879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.929893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.929900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.929907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.929921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 00:31:11.655 [2024-11-20 10:48:43.939859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.655 [2024-11-20 10:48:43.939910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.655 [2024-11-20 10:48:43.939924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.655 [2024-11-20 10:48:43.939934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.655 [2024-11-20 10:48:43.939941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:11.655 [2024-11-20 10:48:43.939955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.655 qpair failed and we were unable to recover it. 
00:31:11.655 [2024-11-20 10:48:43.949893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.655 [2024-11-20 10:48:43.949941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.655 [2024-11-20 10:48:43.949955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.655 [2024-11-20 10:48:43.949962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.655 [2024-11-20 10:48:43.949969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.655 [2024-11-20 10:48:43.949982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.655 qpair failed and we were unable to recover it.
00:31:11.655 [2024-11-20 10:48:43.959927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.655 [2024-11-20 10:48:43.959977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.655 [2024-11-20 10:48:43.959991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.655 [2024-11-20 10:48:43.959998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.655 [2024-11-20 10:48:43.960005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.655 [2024-11-20 10:48:43.960019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.655 qpair failed and we were unable to recover it.
00:31:11.655 [2024-11-20 10:48:43.969956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.655 [2024-11-20 10:48:43.970013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.655 [2024-11-20 10:48:43.970026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.655 [2024-11-20 10:48:43.970034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.655 [2024-11-20 10:48:43.970040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.655 [2024-11-20 10:48:43.970054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.655 qpair failed and we were unable to recover it.
00:31:11.655 [2024-11-20 10:48:43.979978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.655 [2024-11-20 10:48:43.980027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.655 [2024-11-20 10:48:43.980041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.655 [2024-11-20 10:48:43.980048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.655 [2024-11-20 10:48:43.980055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.655 [2024-11-20 10:48:43.980072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.655 qpair failed and we were unable to recover it.
00:31:11.655 [2024-11-20 10:48:43.989999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.655 [2024-11-20 10:48:43.990057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.655 [2024-11-20 10:48:43.990070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.655 [2024-11-20 10:48:43.990079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.655 [2024-11-20 10:48:43.990086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.655 [2024-11-20 10:48:43.990100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.655 qpair failed and we were unable to recover it.
00:31:11.655 [2024-11-20 10:48:44.000062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.655 [2024-11-20 10:48:44.000115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.655 [2024-11-20 10:48:44.000128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.655 [2024-11-20 10:48:44.000136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.655 [2024-11-20 10:48:44.000142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.655 [2024-11-20 10:48:44.000156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.655 qpair failed and we were unable to recover it.
00:31:11.656 [2024-11-20 10:48:44.010068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.656 [2024-11-20 10:48:44.010115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.656 [2024-11-20 10:48:44.010128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.656 [2024-11-20 10:48:44.010136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.656 [2024-11-20 10:48:44.010142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.656 [2024-11-20 10:48:44.010157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.656 qpair failed and we were unable to recover it.
00:31:11.656 [2024-11-20 10:48:44.020048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.656 [2024-11-20 10:48:44.020095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.656 [2024-11-20 10:48:44.020108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.656 [2024-11-20 10:48:44.020115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.656 [2024-11-20 10:48:44.020122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.656 [2024-11-20 10:48:44.020137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.656 qpair failed and we were unable to recover it.
00:31:11.917 [2024-11-20 10:48:44.030091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.917 [2024-11-20 10:48:44.030179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.917 [2024-11-20 10:48:44.030193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.917 [2024-11-20 10:48:44.030201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.917 [2024-11-20 10:48:44.030208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.917 [2024-11-20 10:48:44.030222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.917 qpair failed and we were unable to recover it.
00:31:11.917 [2024-11-20 10:48:44.040182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.917 [2024-11-20 10:48:44.040267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.917 [2024-11-20 10:48:44.040280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.917 [2024-11-20 10:48:44.040288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.917 [2024-11-20 10:48:44.040295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.917 [2024-11-20 10:48:44.040309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.050138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.050191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.050205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.050212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.050219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.050233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.060183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.060233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.060247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.060255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.060261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.060275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.070211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.070289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.070302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.070313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.070320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.070334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.080321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.080428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.080443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.080451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.080457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.080472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.090230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.090280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.090293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.090300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.090306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.090320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.100278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.100340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.100354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.100361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.100368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.100382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.110321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.110370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.110383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.110390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.110397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.110414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.120342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.120390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.120403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.120410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.120417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.120431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.130345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.130391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.130404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.130411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.130418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.130432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.140381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.140430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.140443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.140450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.140457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.140471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.150399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.150441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.150454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.150462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.150468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.150483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.160456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.160508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.160521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.160528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.160535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.918 [2024-11-20 10:48:44.160549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.918 qpair failed and we were unable to recover it.
00:31:11.918 [2024-11-20 10:48:44.170492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.918 [2024-11-20 10:48:44.170543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.918 [2024-11-20 10:48:44.170557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.918 [2024-11-20 10:48:44.170564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.918 [2024-11-20 10:48:44.170571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.170585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.180499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.180546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.180560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.180567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.180573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.180587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.190533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.190579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.190591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.190599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.190606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.190619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.200540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.200601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.200614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.200625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.200632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.200646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.210605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.210653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.210667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.210674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.210681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.210694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.220616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.220659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.220673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.220680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.220687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.220700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.230630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.230672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.230685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.230692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.230699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.230712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.240659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.240705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.240718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.240725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.240732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.240749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.250697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.250786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.250799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.250807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.250814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.250827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.260716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.260761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.260774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.260781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.260788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.260802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.270737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.270787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.270800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.270807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.270814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.270828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:11.919 [2024-11-20 10:48:44.280672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:11.919 [2024-11-20 10:48:44.280757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:11.919 [2024-11-20 10:48:44.280770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:11.919 [2024-11-20 10:48:44.280778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:11.919 [2024-11-20 10:48:44.280785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:11.919 [2024-11-20 10:48:44.280799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:11.919 qpair failed and we were unable to recover it.
00:31:12.183 [2024-11-20 10:48:44.290793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.183 [2024-11-20 10:48:44.290848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.183 [2024-11-20 10:48:44.290862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.183 [2024-11-20 10:48:44.290869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.183 [2024-11-20 10:48:44.290876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.183 [2024-11-20 10:48:44.290890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.183 qpair failed and we were unable to recover it.
00:31:12.183 [2024-11-20 10:48:44.300830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.183 [2024-11-20 10:48:44.300881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.183 [2024-11-20 10:48:44.300906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.183 [2024-11-20 10:48:44.300915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.183 [2024-11-20 10:48:44.300922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.183 [2024-11-20 10:48:44.300942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.183 qpair failed and we were unable to recover it.
00:31:12.183 [2024-11-20 10:48:44.310852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.183 [2024-11-20 10:48:44.310943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.183 [2024-11-20 10:48:44.310958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.183 [2024-11-20 10:48:44.310966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.183 [2024-11-20 10:48:44.310973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.183 [2024-11-20 10:48:44.310987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.183 qpair failed and we were unable to recover it.
00:31:12.183 [2024-11-20 10:48:44.320906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.183 [2024-11-20 10:48:44.321034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.183 [2024-11-20 10:48:44.321059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.183 [2024-11-20 10:48:44.321068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.183 [2024-11-20 10:48:44.321076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.183 [2024-11-20 10:48:44.321096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.183 qpair failed and we were unable to recover it.
00:31:12.183 [2024-11-20 10:48:44.330911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.183 [2024-11-20 10:48:44.330963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.183 [2024-11-20 10:48:44.330978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.183 [2024-11-20 10:48:44.330990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.183 [2024-11-20 10:48:44.330997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.183 [2024-11-20 10:48:44.331012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.183 qpair failed and we were unable to recover it.
00:31:12.183 [2024-11-20 10:48:44.340938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.183 [2024-11-20 10:48:44.340984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.183 [2024-11-20 10:48:44.340998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.183 [2024-11-20 10:48:44.341005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.183 [2024-11-20 10:48:44.341012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.183 [2024-11-20 10:48:44.341026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.350950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.350994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.351008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.351015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.351022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.351036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.360849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.360896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.360910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.360917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.360924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.360938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.371022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.371068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.371082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.371089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.371096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.371117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.381046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.381096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.381110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.381117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.381124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.381137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.391063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.391115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.391128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.391135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.391142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.391155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.401056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.401102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.401115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.401122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.401128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.401142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.411091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.411177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.411191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.411198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.411206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.411220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.421134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.421186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.421199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.421207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.421214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.421227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.431162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.431210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.431224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.431232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.431238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.431253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.441070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.441116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.441130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.441137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.441144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.441162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.451226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.451274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.451287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.451295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.451301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.451315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.461247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.461301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.461316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.461327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.461337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.461352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.184 [2024-11-20 10:48:44.471240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.184 [2024-11-20 10:48:44.471283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.184 [2024-11-20 10:48:44.471297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.184 [2024-11-20 10:48:44.471304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.184 [2024-11-20 10:48:44.471311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.184 [2024-11-20 10:48:44.471326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.184 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.481342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.481392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.481405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.481413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.481419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.481433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.491354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.491401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.491416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.491423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.491430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.491444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.501218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.501266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.501278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.501286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.501292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.501310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.511347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.511389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.511402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.511410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.511416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.511430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.521415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.521515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.521528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.521536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.521543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.521557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.531438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.531524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.531537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.531544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.531551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.531565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.541469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.541547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.541563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.541570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.541577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.541591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.185 [2024-11-20 10:48:44.551515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.185 [2024-11-20 10:48:44.551564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.185 [2024-11-20 10:48:44.551577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.185 [2024-11-20 10:48:44.551585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.185 [2024-11-20 10:48:44.551591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.185 [2024-11-20 10:48:44.551605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.185 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.561578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.561625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.561638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.561646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.561653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.561668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.571554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.571604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.571617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.571624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.571631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.571644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.581579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.581635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.581648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.581655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.581662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.581676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.591605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.591654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.591667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.591678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.591684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.591698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.601632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.601716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.601730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.601738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.601745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.601759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.611618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.611678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.611693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.611700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.611707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.611722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.621690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.621739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.621753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.621760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.621767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.621780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.631633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.631692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.631706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.631713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.631720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.631739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.641629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.641678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.641691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.641699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.641705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.641719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.448 qpair failed and we were unable to recover it.
00:31:12.448 [2024-11-20 10:48:44.651751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.448 [2024-11-20 10:48:44.651804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.448 [2024-11-20 10:48:44.651817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.448 [2024-11-20 10:48:44.651824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.448 [2024-11-20 10:48:44.651831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.448 [2024-11-20 10:48:44.651845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.661793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.661847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.661860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.661867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.661873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.661887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.671813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.671859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.671873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.671881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.671888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.671902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.681859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.681913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.681928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.681935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.681942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.681961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.691856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.691918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.691932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.691939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.691946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.691960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.701875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.701923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.701948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.701957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.701964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.701983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.711978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.712020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.712034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.712042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.712049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.712064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.721968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.722049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.722063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.722075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.722082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.722096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.732007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.732056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.732070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.732077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.732084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.732098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.741899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.741944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.741958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.741965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.741971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.741985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.752066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.752130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.752143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.752150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.752157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.752177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.762140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.762192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.762207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.762214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.762220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.762238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.771982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.772077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.772091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.772098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.772104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.772118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.449 qpair failed and we were unable to recover it.
00:31:12.449 [2024-11-20 10:48:44.782148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.449 [2024-11-20 10:48:44.782195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.449 [2024-11-20 10:48:44.782208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.449 [2024-11-20 10:48:44.782216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.449 [2024-11-20 10:48:44.782222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.449 [2024-11-20 10:48:44.782236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.450 qpair failed and we were unable to recover it.
00:31:12.450 [2024-11-20 10:48:44.792155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.450 [2024-11-20 10:48:44.792197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.450 [2024-11-20 10:48:44.792210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.450 [2024-11-20 10:48:44.792218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.450 [2024-11-20 10:48:44.792224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.450 [2024-11-20 10:48:44.792238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.450 qpair failed and we were unable to recover it.
00:31:12.450 [2024-11-20 10:48:44.802193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.450 [2024-11-20 10:48:44.802240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.450 [2024-11-20 10:48:44.802253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.450 [2024-11-20 10:48:44.802260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.450 [2024-11-20 10:48:44.802267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.450 [2024-11-20 10:48:44.802281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.450 qpair failed and we were unable to recover it.
00:31:12.450 [2024-11-20 10:48:44.812234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.450 [2024-11-20 10:48:44.812281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.450 [2024-11-20 10:48:44.812295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.450 [2024-11-20 10:48:44.812303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.450 [2024-11-20 10:48:44.812310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.450 [2024-11-20 10:48:44.812323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.450 qpair failed and we were unable to recover it.
00:31:12.738 [2024-11-20 10:48:44.822242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.738 [2024-11-20 10:48:44.822286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.738 [2024-11-20 10:48:44.822300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.738 [2024-11-20 10:48:44.822307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.738 [2024-11-20 10:48:44.822314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.738 [2024-11-20 10:48:44.822328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.738 qpair failed and we were unable to recover it.
00:31:12.738 [2024-11-20 10:48:44.832259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.738 [2024-11-20 10:48:44.832302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.738 [2024-11-20 10:48:44.832315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.738 [2024-11-20 10:48:44.832323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.738 [2024-11-20 10:48:44.832329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.738 [2024-11-20 10:48:44.832343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.738 qpair failed and we were unable to recover it.
00:31:12.738 [2024-11-20 10:48:44.842298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.842357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.842370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.842377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.842384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.842398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.852339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.852390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.852404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.852415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.852422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.852436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.862347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.862397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.862410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.862417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.862424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.862438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.872331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.872374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.872387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.872394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.872401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.872414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.882275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.882319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.882332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.882339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.882346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.882360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.892470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.892517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.892530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.892537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.892543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.892560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.902328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.902376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.902389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.902396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.902403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.902417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.912451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.912530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.912544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.912552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.912559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.912573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.922514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.922564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.922577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.922584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.922591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.922605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.932563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.932611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.932624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.932632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.932639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.932653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.942565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.942610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.942624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.942631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.942638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.942652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.952464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.952516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.952529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.952536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.952542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.952556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.962633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.739 [2024-11-20 10:48:44.962681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.739 [2024-11-20 10:48:44.962695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.739 [2024-11-20 10:48:44.962702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.739 [2024-11-20 10:48:44.962709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.739 [2024-11-20 10:48:44.962722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.739 qpair failed and we were unable to recover it.
00:31:12.739 [2024-11-20 10:48:44.972652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:44.972705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:44.972718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:44.972725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:44.972731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:44.972745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:44.982540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:44.982598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:44.982611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:44.982622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:44.982628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:44.982642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:44.992749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:44.992795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:44.992808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:44.992815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:44.992822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:44.992836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.002732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.002789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.002803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.002810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.002816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.002830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.012768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.012819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.012832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.012839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.012846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.012859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.022780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.022822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.022835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.022843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.022850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.022866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.032841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.032886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.032900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.032907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.032914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.032927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.042829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.042894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.042907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.042915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.042922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.042935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.052875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.052962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.052976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.052983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.052990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.053004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.062872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.062921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.062945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.062954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.062961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.062981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.072913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.072967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.072993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.073002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.073009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.073029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.082941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.082989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.083004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.083012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.083019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.083034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.092964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.740 [2024-11-20 10:48:45.093011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.740 [2024-11-20 10:48:45.093025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.740 [2024-11-20 10:48:45.093032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.740 [2024-11-20 10:48:45.093039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.740 [2024-11-20 10:48:45.093053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.740 qpair failed and we were unable to recover it.
00:31:12.740 [2024-11-20 10:48:45.103009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.741 [2024-11-20 10:48:45.103066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.741 [2024-11-20 10:48:45.103079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.741 [2024-11-20 10:48:45.103087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.741 [2024-11-20 10:48:45.103094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:12.741 [2024-11-20 10:48:45.103108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.741 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.113023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.113073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.001 [2024-11-20 10:48:45.113090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.001 [2024-11-20 10:48:45.113098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.001 [2024-11-20 10:48:45.113104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.001 [2024-11-20 10:48:45.113118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.001 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.123022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.123070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.001 [2024-11-20 10:48:45.123085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.001 [2024-11-20 10:48:45.123092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.001 [2024-11-20 10:48:45.123098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.001 [2024-11-20 10:48:45.123113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.001 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.133095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.133139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.001 [2024-11-20 10:48:45.133153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.001 [2024-11-20 10:48:45.133171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.001 [2024-11-20 10:48:45.133177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.001 [2024-11-20 10:48:45.133192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.001 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.143108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.143154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.001 [2024-11-20 10:48:45.143171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.001 [2024-11-20 10:48:45.143179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.001 [2024-11-20 10:48:45.143185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.001 [2024-11-20 10:48:45.143200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.001 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.153130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.153181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.001 [2024-11-20 10:48:45.153195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.001 [2024-11-20 10:48:45.153202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.001 [2024-11-20 10:48:45.153209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.001 [2024-11-20 10:48:45.153226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.001 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.163176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.163238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.001 [2024-11-20 10:48:45.163251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.001 [2024-11-20 10:48:45.163258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.001 [2024-11-20 10:48:45.163264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.001 [2024-11-20 10:48:45.163278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.001 qpair failed and we were unable to recover it.
00:31:13.001 [2024-11-20 10:48:45.173188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.001 [2024-11-20 10:48:45.173235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.173249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.173256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.173263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.173276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.184105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.184151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.184167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.184175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.184181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.184195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.193255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.193301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.193314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.193321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.193328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.193342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.203260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.203305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.203319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.203326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.203332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.203346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.213304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.213358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.213371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.213378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.213384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.213398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.223298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.223344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.223357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.223365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.223371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.223385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.233328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.233405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.233418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.233425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.233433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.233446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.243403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.243452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.243468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.243476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.243482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.243496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.253411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.253462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.253475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.253482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.253489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.253502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.263431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.263472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.263485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.263493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.263499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.263513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.273446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.273491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.273504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.273511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.273518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.273532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.283506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.283597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.283611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.283618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.283624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.283642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.293531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.293578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.293591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.293599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.293606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.293619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.303543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.303589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.303602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.303609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.303615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.303629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.313578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.313661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.313674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.313682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.313688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.313702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.323609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.323652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.323667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.323675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.323681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.323695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.333639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.333688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.002 [2024-11-20 10:48:45.333702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.002 [2024-11-20 10:48:45.333709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.002 [2024-11-20 10:48:45.333715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.002 [2024-11-20 10:48:45.333729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.002 qpair failed and we were unable to recover it.
00:31:13.002 [2024-11-20 10:48:45.343619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.002 [2024-11-20 10:48:45.343664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.003 [2024-11-20 10:48:45.343677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.003 [2024-11-20 10:48:45.343684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.003 [2024-11-20 10:48:45.343691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.003 [2024-11-20 10:48:45.343705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.003 qpair failed and we were unable to recover it.
00:31:13.003 [2024-11-20 10:48:45.353609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.003 [2024-11-20 10:48:45.353704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.003 [2024-11-20 10:48:45.353717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.003 [2024-11-20 10:48:45.353725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.003 [2024-11-20 10:48:45.353732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.003 [2024-11-20 10:48:45.353746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.003 qpair failed and we were unable to recover it.
00:31:13.003 [2024-11-20 10:48:45.363688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.003 [2024-11-20 10:48:45.363733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.003 [2024-11-20 10:48:45.363746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.003 [2024-11-20 10:48:45.363753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.003 [2024-11-20 10:48:45.363760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.003 [2024-11-20 10:48:45.363773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.003 qpair failed and we were unable to recover it.
00:31:13.263 [2024-11-20 10:48:45.373706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.263 [2024-11-20 10:48:45.373758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.263 [2024-11-20 10:48:45.373774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.263 [2024-11-20 10:48:45.373781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.263 [2024-11-20 10:48:45.373788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.263 [2024-11-20 10:48:45.373802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.263 qpair failed and we were unable to recover it.
00:31:13.263 [2024-11-20 10:48:45.383758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.263 [2024-11-20 10:48:45.383802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.263 [2024-11-20 10:48:45.383815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.263 [2024-11-20 10:48:45.383822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.263 [2024-11-20 10:48:45.383829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.263 [2024-11-20 10:48:45.383842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.263 qpair failed and we were unable to recover it.
00:31:13.263 [2024-11-20 10:48:45.393763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.263 [2024-11-20 10:48:45.393808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.393821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.393829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.393835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.393849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.403816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.403863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.403876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.403883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.403889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.403903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.413728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.413780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.413794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.413801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.413808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.413828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.423871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.423960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.423974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.423981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.423988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.424002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.433910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.433960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.433985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.433994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.434001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.434021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.443962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.444047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.444062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.444069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.444076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.444091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.453955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.454006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.454019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.454027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.454033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.454048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.463962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.464006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.464021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.464028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.464035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.464049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.473991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.474085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.474099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.474106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.474113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.474126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.484036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.484083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.484097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.484104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.484110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.484124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.494061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.494107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.494120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.494127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.494134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.494147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.504085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.504131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.504148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.504155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.504165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.504179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.514111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.514162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.514175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.264 [2024-11-20 10:48:45.514183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.264 [2024-11-20 10:48:45.514189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.264 [2024-11-20 10:48:45.514204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.264 qpair failed and we were unable to recover it.
00:31:13.264 [2024-11-20 10:48:45.524145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.264 [2024-11-20 10:48:45.524195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.264 [2024-11-20 10:48:45.524209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.524216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.524223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.524237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.534176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.534235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.534250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.534258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.534264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.534279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.544148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.544194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.544208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.544215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.544225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.544240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.554211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.554288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.554302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.554309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.554316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.554330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.564209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.564288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.564301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.564308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.564315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.564329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.574313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.574363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.574376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.574384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.574390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.574404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.584293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.584351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.584364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.584371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.584378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.584392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.594312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.594357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.594370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.594378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.594384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.594399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.604336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.604391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.604405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.604413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.604420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.604434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.614397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.614441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.614455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.614462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.614469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.614482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.624399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.624448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.624463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.624470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.624477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.624490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.265 [2024-11-20 10:48:45.634415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.265 [2024-11-20 10:48:45.634458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.265 [2024-11-20 10:48:45.634475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.265 [2024-11-20 10:48:45.634483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.265 [2024-11-20 10:48:45.634489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.265 [2024-11-20 10:48:45.634503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.265 qpair failed and we were unable to recover it.
00:31:13.526 [2024-11-20 10:48:45.644485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.527 [2024-11-20 10:48:45.644534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.527 [2024-11-20 10:48:45.644547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.527 [2024-11-20 10:48:45.644555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.527 [2024-11-20 10:48:45.644562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.527 [2024-11-20 10:48:45.644576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.527 qpair failed and we were unable to recover it.
00:31:13.527 [2024-11-20 10:48:45.654498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.527 [2024-11-20 10:48:45.654550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.527 [2024-11-20 10:48:45.654563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.527 [2024-11-20 10:48:45.654571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.527 [2024-11-20 10:48:45.654577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:13.527 [2024-11-20 10:48:45.654591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.527 qpair failed and we were unable to recover it.
00:31:13.527 [2024-11-20 10:48:45.664520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.664565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.664579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.664586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.664593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.664607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.674551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.674593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.674606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.674614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.674624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.674638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.684482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.684530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.684544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.684551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.684558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.684572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 
00:31:13.527 [2024-11-20 10:48:45.694621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.694666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.694680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.694687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.694694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.694707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.704493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.704537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.704552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.704560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.704567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.704581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.714651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.714695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.714709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.714716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.714723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.714737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 
00:31:13.527 [2024-11-20 10:48:45.724641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.724690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.724703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.724711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.724717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.724731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.734723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.734767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.734780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.734788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.734794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.734808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.744781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.744855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.744870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.744877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.744884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.744898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 
00:31:13.527 [2024-11-20 10:48:45.754772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.754864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.754878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.754885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.754891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.754905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.527 [2024-11-20 10:48:45.764762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.527 [2024-11-20 10:48:45.764809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.527 [2024-11-20 10:48:45.764825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.527 [2024-11-20 10:48:45.764833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.527 [2024-11-20 10:48:45.764840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.527 [2024-11-20 10:48:45.764854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.527 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.774796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.774845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.774858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.774866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.774872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.774886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 
00:31:13.528 [2024-11-20 10:48:45.784833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.784882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.784896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.784903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.784910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.784924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.794852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.794897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.794910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.794917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.794924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.794937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.804883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.804929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.804942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.804950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.804959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.804973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 
00:31:13.528 [2024-11-20 10:48:45.814938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.814992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.815005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.815012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.815019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.815032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.824938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.824997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.825010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.825018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.825025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.825038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.834963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.835007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.835020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.835028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.835034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.835048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 
00:31:13.528 [2024-11-20 10:48:45.844993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.845079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.845092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.845100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.845107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.845121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.854910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.854959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.854975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.854983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.854990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.855006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.865028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.865070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.865084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.865092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.865099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.865113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 
00:31:13.528 [2024-11-20 10:48:45.875089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.875134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.875148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.875155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.875165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.875179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.885072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.885117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.885131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.885138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.885145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.885162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 00:31:13.528 [2024-11-20 10:48:45.895094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.528 [2024-11-20 10:48:45.895144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.528 [2024-11-20 10:48:45.895164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.528 [2024-11-20 10:48:45.895172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.528 [2024-11-20 10:48:45.895179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.528 [2024-11-20 10:48:45.895193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.528 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-11-20 10:48:45.905128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.905179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.905193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.905200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.905207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.905221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:45.915186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.915241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.915254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.915262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.915268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.915282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:45.925076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.925140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.925153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.925165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.925172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.925185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-11-20 10:48:45.935238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.935288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.935301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.935309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.935319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.935334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:45.945911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.945972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.945985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.945992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.945999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.946013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:45.955249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.955298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.955313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.955320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.955327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.955341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-11-20 10:48:45.965319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.965369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.965382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.965390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.965396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.965410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:45.975355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.975401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.975415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.975422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.975428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.975443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:45.985379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.985427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.985440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.985447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.985454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.985468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-11-20 10:48:45.995398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:45.995442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:45.995455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:45.995462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:45.995469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:45.995482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:46.005421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:46.005468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:46.005481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:46.005488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:46.005494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:46.005508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:46.015441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:46.015488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:46.015502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:46.015509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:46.015516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:46.015530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-11-20 10:48:46.025445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:46.025490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:46.025506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:46.025513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:46.025520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:46.025534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:46.035459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:46.035560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:46.035575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:46.035583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:46.035589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:46.035604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-11-20 10:48:46.045478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:46.045572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.791 [2024-11-20 10:48:46.045585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.791 [2024-11-20 10:48:46.045593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.791 [2024-11-20 10:48:46.045600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.791 [2024-11-20 10:48:46.045613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-11-20 10:48:46.055531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.791 [2024-11-20 10:48:46.055579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.055592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.055600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.055607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.055621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.065564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.065609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.065622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.065629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.065640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.065654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.075606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.075682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.075695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.075702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.075709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.075723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-11-20 10:48:46.085653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.085697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.085711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.085718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.085724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.085738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.095665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.095732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.095745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.095752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.095759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.095773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.105679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.105723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.105737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.105744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.105751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.105765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-11-20 10:48:46.115704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.115754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.115768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.115775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.115782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.115799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.125612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.125661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.125676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.125683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.125690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.125705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.135785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.135840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.135854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.135861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.135868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.135881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-11-20 10:48:46.145672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.145719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.145732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.145739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.145746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.145759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-11-20 10:48:46.155821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.792 [2024-11-20 10:48:46.155872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.792 [2024-11-20 10:48:46.155889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.792 [2024-11-20 10:48:46.155897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.792 [2024-11-20 10:48:46.155903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:13.792 [2024-11-20 10:48:46.155918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.792 qpair failed and we were unable to recover it. 00:31:14.053 [2024-11-20 10:48:46.165859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.053 [2024-11-20 10:48:46.165906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.053 [2024-11-20 10:48:46.165920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.053 [2024-11-20 10:48:46.165927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.053 [2024-11-20 10:48:46.165934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.053 [2024-11-20 10:48:46.165947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.053 qpair failed and we were unable to recover it. 
00:31:14.053 [2024-11-20 10:48:46.175891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.053 [2024-11-20 10:48:46.175976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.053 [2024-11-20 10:48:46.175989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.053 [2024-11-20 10:48:46.175998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.053 [2024-11-20 10:48:46.176004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.053 [2024-11-20 10:48:46.176018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.053 qpair failed and we were unable to recover it. 00:31:14.053 [2024-11-20 10:48:46.185888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.053 [2024-11-20 10:48:46.185958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.054 [2024-11-20 10:48:46.185983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.054 [2024-11-20 10:48:46.185992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.054 [2024-11-20 10:48:46.186000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.054 [2024-11-20 10:48:46.186020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.054 qpair failed and we were unable to recover it. 00:31:14.054 [2024-11-20 10:48:46.195925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.054 [2024-11-20 10:48:46.195970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.054 [2024-11-20 10:48:46.195984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.054 [2024-11-20 10:48:46.195992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.054 [2024-11-20 10:48:46.196004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.054 [2024-11-20 10:48:46.196019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.054 qpair failed and we were unable to recover it. 
00:31:14.054 [2024-11-20 10:48:46.205960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.054 [2024-11-20 10:48:46.206008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.054 [2024-11-20 10:48:46.206022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.054 [2024-11-20 10:48:46.206029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.054 [2024-11-20 10:48:46.206036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.054 [2024-11-20 10:48:46.206050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.054 qpair failed and we were unable to recover it. 00:31:14.054 [2024-11-20 10:48:46.215988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.054 [2024-11-20 10:48:46.216034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.054 [2024-11-20 10:48:46.216048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.054 [2024-11-20 10:48:46.216056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.054 [2024-11-20 10:48:46.216062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.054 [2024-11-20 10:48:46.216076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.054 qpair failed and we were unable to recover it. 00:31:14.054 [2024-11-20 10:48:46.225999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.054 [2024-11-20 10:48:46.226049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.054 [2024-11-20 10:48:46.226062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.054 [2024-11-20 10:48:46.226070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.054 [2024-11-20 10:48:46.226076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.054 [2024-11-20 10:48:46.226090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.054 qpair failed and we were unable to recover it. 
00:31:14.054 [2024-11-20 10:48:46.235999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.054 [2024-11-20 10:48:46.236038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.054 [2024-11-20 10:48:46.236052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.054 [2024-11-20 10:48:46.236060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.054 [2024-11-20 10:48:46.236067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.054 [2024-11-20 10:48:46.236081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.054 qpair failed and we were unable to recover it.
00:31:14.054 [2024-11-20 10:48:46.246067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.054 [2024-11-20 10:48:46.246114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.054 [2024-11-20 10:48:46.246127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.054 [2024-11-20 10:48:46.246135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.054 [2024-11-20 10:48:46.246141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.054 [2024-11-20 10:48:46.246155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.054 qpair failed and we were unable to recover it.
00:31:14.054 [2024-11-20 10:48:46.256101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.054 [2024-11-20 10:48:46.256149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.054 [2024-11-20 10:48:46.256166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.054 [2024-11-20 10:48:46.256173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.054 [2024-11-20 10:48:46.256180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.054 [2024-11-20 10:48:46.256194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.054 qpair failed and we were unable to recover it.
00:31:14.054 [2024-11-20 10:48:46.266113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.054 [2024-11-20 10:48:46.266154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.054 [2024-11-20 10:48:46.266171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.054 [2024-11-20 10:48:46.266178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.054 [2024-11-20 10:48:46.266185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.054 [2024-11-20 10:48:46.266199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.054 qpair failed and we were unable to recover it.
00:31:14.054 [2024-11-20 10:48:46.276144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.054 [2024-11-20 10:48:46.276192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.054 [2024-11-20 10:48:46.276205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.054 [2024-11-20 10:48:46.276212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.054 [2024-11-20 10:48:46.276219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.054 [2024-11-20 10:48:46.276233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.054 qpair failed and we were unable to recover it.
00:31:14.054 [2024-11-20 10:48:46.286166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.054 [2024-11-20 10:48:46.286211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.054 [2024-11-20 10:48:46.286228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.054 [2024-11-20 10:48:46.286235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.054 [2024-11-20 10:48:46.286242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.286256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.296232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.296279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.296292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.296300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.296306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.296320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.306237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.306287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.306300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.306307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.306314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.306329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.316153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.316213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.316226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.316234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.316241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.316254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.326299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.326348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.326361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.326369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.326380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.326394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.336332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.336383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.336397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.336404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.336411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.336425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.346218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.346271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.346284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.346291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.346298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.346311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.356380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.356426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.356439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.356447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.356454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.356468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.366293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.366343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.366358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.366365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.366371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.366385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.376440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.376488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.376501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.376508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.376515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.376528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.386451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.386540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.055 [2024-11-20 10:48:46.386554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.055 [2024-11-20 10:48:46.386561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.055 [2024-11-20 10:48:46.386569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.055 [2024-11-20 10:48:46.386582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.055 qpair failed and we were unable to recover it.
00:31:14.055 [2024-11-20 10:48:46.396342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.055 [2024-11-20 10:48:46.396383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.056 [2024-11-20 10:48:46.396396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.056 [2024-11-20 10:48:46.396403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.056 [2024-11-20 10:48:46.396410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.056 [2024-11-20 10:48:46.396423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.056 qpair failed and we were unable to recover it.
00:31:14.056 [2024-11-20 10:48:46.406482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.056 [2024-11-20 10:48:46.406533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.056 [2024-11-20 10:48:46.406546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.056 [2024-11-20 10:48:46.406554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.056 [2024-11-20 10:48:46.406560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.056 [2024-11-20 10:48:46.406574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.056 qpair failed and we were unable to recover it.
00:31:14.056 [2024-11-20 10:48:46.416518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.056 [2024-11-20 10:48:46.416581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.056 [2024-11-20 10:48:46.416597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.056 [2024-11-20 10:48:46.416604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.056 [2024-11-20 10:48:46.416611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.056 [2024-11-20 10:48:46.416625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.056 qpair failed and we were unable to recover it.
00:31:14.317 [2024-11-20 10:48:46.426570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.317 [2024-11-20 10:48:46.426619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.317 [2024-11-20 10:48:46.426632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.317 [2024-11-20 10:48:46.426640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.317 [2024-11-20 10:48:46.426646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.317 [2024-11-20 10:48:46.426660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.317 qpair failed and we were unable to recover it.
00:31:14.317 [2024-11-20 10:48:46.436564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.317 [2024-11-20 10:48:46.436616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.317 [2024-11-20 10:48:46.436629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.436636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.436643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.436656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.446601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.446652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.446666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.446673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.446680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.446693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.456656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.456706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.456719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.456727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.456737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.456750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.466664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.466708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.466722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.466729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.466736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.466749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.476683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.476735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.476748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.476755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.476762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.476775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.486763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.486812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.486826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.486834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.486840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.486854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.496786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.496869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.496884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.496892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.496899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.496912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.506657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.506723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.506736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.506744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.506750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.506764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.516795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.516842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.516856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.516863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.516870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.516883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.526795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.526842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.526859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.526867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.318 [2024-11-20 10:48:46.526874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.318 [2024-11-20 10:48:46.526888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.318 qpair failed and we were unable to recover it.
00:31:14.318 [2024-11-20 10:48:46.536833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.318 [2024-11-20 10:48:46.536883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.318 [2024-11-20 10:48:46.536897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.318 [2024-11-20 10:48:46.536904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.536911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.536925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.546839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.546884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.546900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.546908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.546915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.546929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.556883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.556932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.556945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.556953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.556959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.556973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.566915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.566971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.566984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.566992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.566999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.567012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.576944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.577001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.577026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.577035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.577042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.577062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.586970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.587027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.587042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.587049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.587060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.587076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.596992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.597038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.597052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.597059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.597066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.597081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.606980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.607025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.607041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.607049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.607055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.607070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.616915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.616964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.616977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.616985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.616991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.617005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.627042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.627085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.627099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.627106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.627113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.627127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.637104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.319 [2024-11-20 10:48:46.637147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.319 [2024-11-20 10:48:46.637164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.319 [2024-11-20 10:48:46.637172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.319 [2024-11-20 10:48:46.637178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.319 [2024-11-20 10:48:46.637193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.319 qpair failed and we were unable to recover it.
00:31:14.319 [2024-11-20 10:48:46.647127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.320 [2024-11-20 10:48:46.647203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.320 [2024-11-20 10:48:46.647217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.320 [2024-11-20 10:48:46.647224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.320 [2024-11-20 10:48:46.647230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.320 [2024-11-20 10:48:46.647244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.320 qpair failed and we were unable to recover it.
00:31:14.320 [2024-11-20 10:48:46.657118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.320 [2024-11-20 10:48:46.657192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.320 [2024-11-20 10:48:46.657205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.320 [2024-11-20 10:48:46.657213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.320 [2024-11-20 10:48:46.657220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.320 [2024-11-20 10:48:46.657234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.320 qpair failed and we were unable to recover it.
00:31:14.320 [2024-11-20 10:48:46.667172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.320 [2024-11-20 10:48:46.667216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.320 [2024-11-20 10:48:46.667230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.320 [2024-11-20 10:48:46.667237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.320 [2024-11-20 10:48:46.667243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.320 [2024-11-20 10:48:46.667258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.320 qpair failed and we were unable to recover it.
00:31:14.320 [2024-11-20 10:48:46.677172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.320 [2024-11-20 10:48:46.677217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.320 [2024-11-20 10:48:46.677233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.320 [2024-11-20 10:48:46.677241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.320 [2024-11-20 10:48:46.677247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.320 [2024-11-20 10:48:46.677261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.320 qpair failed and we were unable to recover it.
00:31:14.320 [2024-11-20 10:48:46.687239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.320 [2024-11-20 10:48:46.687288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.320 [2024-11-20 10:48:46.687303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.320 [2024-11-20 10:48:46.687310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.320 [2024-11-20 10:48:46.687317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.320 [2024-11-20 10:48:46.687331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.320 qpair failed and we were unable to recover it.
00:31:14.583 [2024-11-20 10:48:46.697258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.583 [2024-11-20 10:48:46.697304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.583 [2024-11-20 10:48:46.697317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.583 [2024-11-20 10:48:46.697324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.583 [2024-11-20 10:48:46.697331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.583 [2024-11-20 10:48:46.697345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.583 qpair failed and we were unable to recover it.
00:31:14.583 [2024-11-20 10:48:46.707247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.583 [2024-11-20 10:48:46.707294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.583 [2024-11-20 10:48:46.707308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.583 [2024-11-20 10:48:46.707315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.583 [2024-11-20 10:48:46.707322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.583 [2024-11-20 10:48:46.707336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.583 qpair failed and we were unable to recover it.
00:31:14.583 [2024-11-20 10:48:46.717293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.583 [2024-11-20 10:48:46.717350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.583 [2024-11-20 10:48:46.717364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.583 [2024-11-20 10:48:46.717371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.583 [2024-11-20 10:48:46.717381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.583 [2024-11-20 10:48:46.717396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.583 qpair failed and we were unable to recover it.
00:31:14.583 [2024-11-20 10:48:46.727393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.583 [2024-11-20 10:48:46.727455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.583 [2024-11-20 10:48:46.727469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.583 [2024-11-20 10:48:46.727476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.583 [2024-11-20 10:48:46.727483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.583 [2024-11-20 10:48:46.727497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.583 qpair failed and we were unable to recover it.
00:31:14.583 [2024-11-20 10:48:46.737383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.583 [2024-11-20 10:48:46.737429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.583 [2024-11-20 10:48:46.737443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.583 [2024-11-20 10:48:46.737450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.583 [2024-11-20 10:48:46.737457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.583 [2024-11-20 10:48:46.737471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.747402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.747450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.747464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.747471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.747478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.584 [2024-11-20 10:48:46.747492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.757414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.757491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.757504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.757512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.757519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.584 [2024-11-20 10:48:46.757533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.767461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.767505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.767519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.767526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.767532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.584 [2024-11-20 10:48:46.767546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.777496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.777544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.777558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.777565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.777572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.584 [2024-11-20 10:48:46.777587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.787503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.787546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.787559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.787566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.787573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.584 [2024-11-20 10:48:46.787586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.797484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.797528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.797541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.797548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.797555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.584 [2024-11-20 10:48:46.797569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.584 qpair failed and we were unable to recover it.
00:31:14.584 [2024-11-20 10:48:46.807519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.584 [2024-11-20 10:48:46.807566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.584 [2024-11-20 10:48:46.807583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.584 [2024-11-20 10:48:46.807590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.584 [2024-11-20 10:48:46.807596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.584 [2024-11-20 10:48:46.807610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.584 qpair failed and we were unable to recover it. 00:31:14.584 [2024-11-20 10:48:46.817553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.584 [2024-11-20 10:48:46.817598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.584 [2024-11-20 10:48:46.817611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.584 [2024-11-20 10:48:46.817619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.584 [2024-11-20 10:48:46.817625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.584 [2024-11-20 10:48:46.817640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.584 qpair failed and we were unable to recover it. 00:31:14.584 [2024-11-20 10:48:46.827597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.584 [2024-11-20 10:48:46.827648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.584 [2024-11-20 10:48:46.827661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.584 [2024-11-20 10:48:46.827669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.584 [2024-11-20 10:48:46.827675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0 00:31:14.584 [2024-11-20 10:48:46.827689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.584 qpair failed and we were unable to recover it. 
00:31:14.584 [2024-11-20 10:48:46.837499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.584 [2024-11-20 10:48:46.837573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.584 [2024-11-20 10:48:46.837587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.584 [2024-11-20 10:48:46.837594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.584 [2024-11-20 10:48:46.837601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.837615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.847634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.847681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.847694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.847702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.847712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.847725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.857690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.857739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.857752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.857760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.857766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.857780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.867680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.867728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.867741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.867748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.867755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.867769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.877733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.877774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.877787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.877794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.877801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.877815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.887753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.887796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.887810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.887817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.887823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.887837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.897778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.897826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.897839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.897847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.897853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.897867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.907817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.907866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.907891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.907901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.907908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.907927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.917846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.917891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.917906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.917914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.917921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.917936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.927868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.927914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.927928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.927935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.927942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.585 [2024-11-20 10:48:46.927956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.585 qpair failed and we were unable to recover it.
00:31:14.585 [2024-11-20 10:48:46.937799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.585 [2024-11-20 10:48:46.937847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.585 [2024-11-20 10:48:46.937865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.585 [2024-11-20 10:48:46.937872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.585 [2024-11-20 10:48:46.937879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17890c0
00:31:14.586 [2024-11-20 10:48:46.937893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.586 qpair failed and we were unable to recover it.
00:31:14.586 [2024-11-20 10:48:46.947938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.586 [2024-11-20 10:48:46.948033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.586 [2024-11-20 10:48:46.948098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.586 [2024-11-20 10:48:46.948124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.586 [2024-11-20 10:48:46.948145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2388000b90
00:31:14.586 [2024-11-20 10:48:46.948218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:14.586 qpair failed and we were unable to recover it.
00:31:14.847 [2024-11-20 10:48:46.957956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.847 [2024-11-20 10:48:46.958042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.847 [2024-11-20 10:48:46.958093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.847 [2024-11-20 10:48:46.958115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.848 [2024-11-20 10:48:46.958132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2388000b90
00:31:14.848 [2024-11-20 10:48:46.958186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:14.848 qpair failed and we were unable to recover it.
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Read completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 Write completed with error (sct=0, sc=8)
00:31:14.848 starting I/O failed
00:31:14.848 [2024-11-20 10:48:46.959081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:14.848 [2024-11-20 10:48:46.967971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.848 [2024-11-20 10:48:46.968073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.848 [2024-11-20 10:48:46.968136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.848 [2024-11-20 10:48:46.968172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.848 [2024-11-20 10:48:46.968194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2390000b90
00:31:14.848 [2024-11-20 10:48:46.968250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:14.848 qpair failed and we were unable to recover it.
00:31:14.848 [2024-11-20 10:48:46.978023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.848 [2024-11-20 10:48:46.978132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.848 [2024-11-20 10:48:46.978169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.848 [2024-11-20 10:48:46.978185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.848 [2024-11-20 10:48:46.978199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2390000b90
00:31:14.848 [2024-11-20 10:48:46.978231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:14.848 qpair failed and we were unable to recover it.
00:31:14.848 [2024-11-20 10:48:46.978395] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:31:14.848 A controller has encountered a failure and is being reset.
00:31:14.848 [2024-11-20 10:48:46.978504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ee00 (9): Bad file descriptor
00:31:14.848 Controller properly reset.
00:31:14.848 Initializing NVMe Controllers
00:31:14.848 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:14.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:14.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:31:14.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:31:14.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:31:14.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:31:14.848 Initialization complete. Launching workers.
00:31:14.848 Starting thread on core 1
00:31:14.848 Starting thread on core 2
00:31:14.848 Starting thread on core 3
00:31:14.848 Starting thread on core 0
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:31:14.848 
00:31:14.848 real 0m11.531s
00:31:14.848 user 0m22.060s
00:31:14.848 sys 0m4.026s
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:14.848 ************************************
00:31:14.848 END TEST nvmf_target_disconnect_tc2
00:31:14.848 ************************************
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:14.848 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:14.848 rmmod nvme_tcp
00:31:14.848 rmmod nvme_fabrics
00:31:14.848 rmmod nvme_keyring
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2245131 ']'
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2245131
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2245131 ']'
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2245131
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245131
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2245131'
00:31:15.110 killing process with pid 2245131
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2245131
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2245131
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:15.110 10:48:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:17.657 10:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:17.657 
00:31:17.657 real 0m21.975s
00:31:17.657 user 0m50.356s
00:31:17.657 sys 0m10.182s
00:31:17.657 10:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:17.657 10:48:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:17.657 ************************************
00:31:17.657 END TEST nvmf_target_disconnect
00:31:17.657 ************************************
00:31:17.657 10:48:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:17.657 
00:31:17.657 real 6m31.619s
00:31:17.657 user 11m22.898s
00:31:17.657 sys 2m15.916s
00:31:17.657 10:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:17.657 10:48:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.657 ************************************
00:31:17.657 END TEST nvmf_host
00:31:17.657 ************************************
00:31:17.657 10:48:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:31:17.657 10:48:49 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:31:17.657 10:48:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:17.657 10:48:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:17.657 10:48:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:17.657 10:48:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:17.657 ************************************
00:31:17.657 START TEST nvmf_target_core_interrupt_mode
00:31:17.657 ************************************
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:17.658 * Looking for test storage...
00:31:17.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:31:17.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.658 --rc genhtml_branch_coverage=1
00:31:17.658 --rc genhtml_function_coverage=1
00:31:17.658 --rc genhtml_legend=1
00:31:17.658 --rc geninfo_all_blocks=1
00:31:17.658 --rc geninfo_unexecuted_blocks=1
00:31:17.658 
00:31:17.658 '
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:31:17.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.658 --rc genhtml_branch_coverage=1
00:31:17.658 --rc genhtml_function_coverage=1
00:31:17.658 --rc genhtml_legend=1
00:31:17.658 --rc geninfo_all_blocks=1
00:31:17.658 --rc geninfo_unexecuted_blocks=1
00:31:17.658 
00:31:17.658 '
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:31:17.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.658 --rc genhtml_branch_coverage=1
00:31:17.658 --rc genhtml_function_coverage=1
00:31:17.658 --rc genhtml_legend=1
00:31:17.658 --rc geninfo_all_blocks=1
00:31:17.658 --rc geninfo_unexecuted_blocks=1
00:31:17.658 
00:31:17.658 '
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:31:17.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.658 --rc genhtml_branch_coverage=1
00:31:17.658 --rc genhtml_function_coverage=1
00:31:17.658 --rc genhtml_legend=1
00:31:17.658 --rc geninfo_all_blocks=1
00:31:17.658 --rc geninfo_unexecuted_blocks=1
00:31:17.658 
00:31:17.658 '
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:31:17.658 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:17.659 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:17.659 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:17.659 ************************************
00:31:17.659 START TEST nvmf_abort
00:31:17.659 ************************************
00:31:17.659 10:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:31:17.659 * Looking for test storage...
00:31:17.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:17.659 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:31:17.659 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version
00:31:17.659 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:31:17.922 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:31:17.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.923 --rc genhtml_branch_coverage=1
00:31:17.923 --rc genhtml_function_coverage=1
00:31:17.923 --rc genhtml_legend=1
00:31:17.923 --rc geninfo_all_blocks=1
00:31:17.923 --rc geninfo_unexecuted_blocks=1
00:31:17.923 
00:31:17.923 '
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:31:17.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.923 --rc genhtml_branch_coverage=1
00:31:17.923 --rc genhtml_function_coverage=1
00:31:17.923 --rc genhtml_legend=1
00:31:17.923 --rc geninfo_all_blocks=1
00:31:17.923 --rc geninfo_unexecuted_blocks=1
00:31:17.923 
00:31:17.923 '
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:31:17.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.923 --rc genhtml_branch_coverage=1
00:31:17.923 --rc genhtml_function_coverage=1
00:31:17.923 --rc genhtml_legend=1
00:31:17.923 --rc geninfo_all_blocks=1
00:31:17.923 --rc geninfo_unexecuted_blocks=1
00:31:17.923 
00:31:17.923 '
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:31:17.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:17.923 --rc genhtml_branch_coverage=1
00:31:17.923 --rc genhtml_function_coverage=1
00:31:17.923 --rc genhtml_legend=1
00:31:17.923 --rc geninfo_all_blocks=1
00:31:17.923 --rc geninfo_unexecuted_blocks=1
00:31:17.923 
00:31:17.923 '
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.923 10:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.073 10:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:26.073 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
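The e810/x722/mlx arrays populated above are lookup tables of known Intel and Mellanox NIC device IDs; the scan that follows walks each matching PCI address and, for TCP, collects the kernel netdev names exposed under sysfs (the "Found net devices under ..." entries below). A condensed sketch of that walk, assuming the same sysfs layout; pci_addrs is a hypothetical stand-in for the pci_devs array the trace uses:

# Hypothetical stand-in for the trace's pci_devs array (the two e810 ports found above).
pci_addrs=(0000:4b:00.0 0000:4b:00.1)
for pci in "${pci_addrs[@]}"; do
  # Each PCI device lists its network interfaces under /sys/bus/pci/devices/<addr>/net/.
  for net_path in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${net_path##*/}"   # strip the path, keep the ifname
  done
done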
00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:26.073 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.073 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:26.074 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:26.074 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:31:26.074 00:31:26.074 --- 10.0.0.2 ping statistics --- 00:31:26.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.074 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:31:26.074 00:31:26.074 --- 10.0.0.1 ping statistics --- 00:31:26.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.074 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2250772 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2250772 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2250772 ']' 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.074 10:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.074 [2024-11-20 10:48:57.660434] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:26.074 [2024-11-20 10:48:57.661553] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:31:26.074 [2024-11-20 10:48:57.661604] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.074 [2024-11-20 10:48:57.761854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:26.074 [2024-11-20 10:48:57.813358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.074 [2024-11-20 10:48:57.813406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.074 [2024-11-20 10:48:57.813415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.074 [2024-11-20 10:48:57.813422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.074 [2024-11-20 10:48:57.813428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.074 [2024-11-20 10:48:57.815253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.074 [2024-11-20 10:48:57.815562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.074 [2024-11-20 10:48:57.815564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.074 [2024-11-20 10:48:57.891765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:26.074 [2024-11-20 10:48:57.892787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:26.074 [2024-11-20 10:48:57.893357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
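The target is launched inside the test namespace with the argument list assembled earlier: -i 0 is the shared-memory id, -e 0xFFFF the full tracepoint mask, --interrupt-mode the flag added by nvmf/common.sh@34, and -m 0xE pins reactors to cores 1-3, matching the three reactor-start notices above. A minimal sketch of the launch-and-wait step; polling for the RPC socket is only a crude stand-in for the suite's waitforlisten helper:

# Start the target inside the namespace and remember its pid.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# waitforlisten stand-in (assumption): block until the RPC UNIX socket appears.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done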
00:31:26.074 [2024-11-20 10:48:57.893497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.336 [2024-11-20 10:48:58.528630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.336 Malloc0 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.336 Delay0 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.336 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
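The rpc_cmd calls above provision the whole target in five steps: a TCP transport, a 64 MiB RAM-backed bdev with 4096-byte blocks, a delay bdev layered on top with average and p99 read/write latencies of 1,000,000 us (~1 s, presumably so outstanding I/O linger long enough to be abortable), a subsystem open to any host, and the namespace attach. The same sequence issued directly with rpc.py, as a sketch; the rpc path is assumed relative to the SPDK tree and the transport options are copied verbatim from the trace:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0                  # 64 MiB, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000             # latencies in microseconds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # exposed as NSID 1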
00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.337 [2024-11-20 10:48:58.632562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.337 10:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:26.598 [2024-11-20 10:48:58.775389] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:28.590 Initializing NVMe Controllers 00:31:28.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:28.590 controller IO queue size 128 less than required 00:31:28.590 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:28.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:28.590 Initialization complete. Launching workers. 
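With the subsystem and discovery listeners up on 10.0.0.2:4420, the abort example connects from the root namespace and drives queue depth 128 for one second; the "queue size 128 less than required" notice is expected, since saturating the queue is exactly what forces aborts. An equivalent standalone invocation, sketched with the flags as they appear in the trace:

$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128    # core mask 0x1, 1 s run, queue depth 128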
00:31:28.590 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28295 00:31:28.590 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28352, failed to submit 66 00:31:28.590 success 28295, unsuccessful 57, failed 0 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:28.590 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:28.590 rmmod nvme_tcp 00:31:28.853 rmmod nvme_fabrics 00:31:28.853 rmmod nvme_keyring 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2250772 ']' 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2250772 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2250772 ']' 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2250772 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2250772 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:28.853 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:28.854 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2250772' 00:31:28.854 killing process with pid 2250772 
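Reading the summary above: 123 I/O completed normally and 28,295 "failed", which lines up exactly with the 28,295 successful aborts, i.e. nearly every queued request was cancelled in flight, as the test intends; 57 aborts raced with completion and 66 could not be submitted at all. Teardown then mirrors setup, sketched here from the trace:

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
# Unload the kernel initiator stack pulled in by the earlier modprobe nvme-tcp.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics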
00:31:28.854 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2250772 00:31:28.854 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2250772 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.115 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.027 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.027 00:31:31.027 real 0m13.449s 00:31:31.027 user 0m11.219s 00:31:31.027 sys 0m6.988s 00:31:31.027 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.027 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:31.027 ************************************ 00:31:31.027 END TEST nvmf_abort 00:31:31.027 ************************************ 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:31.289 ************************************ 00:31:31.289 START TEST nvmf_ns_hotplug_stress 00:31:31.289 ************************************ 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:31.289 * Looking for test storage... 
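The iptr helper in the cleanup above undoes the firewall change non-destructively: because the earlier ACCEPT rule was installed with an SPDK_NVMF comment, cleanup can re-load the ruleset with only the tagged rules filtered out before flushing the test addresses. A sketch of that step:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
ip -4 addr flush cvl_0_1                               # clear the initiator-side address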
00:31:31.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.289 --rc genhtml_branch_coverage=1 00:31:31.289 --rc genhtml_function_coverage=1 00:31:31.289 --rc genhtml_legend=1 00:31:31.289 --rc geninfo_all_blocks=1 00:31:31.289 --rc geninfo_unexecuted_blocks=1 00:31:31.289 00:31:31.289 ' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.289 --rc genhtml_branch_coverage=1 00:31:31.289 --rc genhtml_function_coverage=1 00:31:31.289 --rc genhtml_legend=1 00:31:31.289 --rc geninfo_all_blocks=1 00:31:31.289 --rc geninfo_unexecuted_blocks=1 00:31:31.289 00:31:31.289 ' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.289 --rc genhtml_branch_coverage=1 00:31:31.289 --rc genhtml_function_coverage=1 00:31:31.289 --rc genhtml_legend=1 00:31:31.289 --rc geninfo_all_blocks=1 00:31:31.289 --rc geninfo_unexecuted_blocks=1 00:31:31.289 00:31:31.289 ' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:31.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.289 --rc genhtml_branch_coverage=1 00:31:31.289 --rc genhtml_function_coverage=1 
00:31:31.289 --rc genhtml_legend=1 00:31:31.289 --rc geninfo_all_blocks=1 00:31:31.289 --rc geninfo_unexecuted_blocks=1 00:31:31.289 00:31:31.289 ' 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.289 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
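nvmf/common.sh@17-19 above derive the initiator identity: a UUID-based host NQN from nvme gen-hostnqn, with the bare UUID reused as the host ID. A sketch of that derivation; extracting the ID by stripping the NQN prefix is an assumption about how the script actually computes it:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: keep only the UUID part
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")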
00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.551 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.695 10:49:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.695 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.696 10:49:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:39.696 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:39.696 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.696 
10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:39.696 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:39.696 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.696 10:49:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.696 10:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.696 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:31:39.696 00:31:39.696 --- 10.0.0.2 ping statistics --- 00:31:39.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.696 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:31:39.697
00:31:39.697 --- 10.0.0.1 ping statistics ---
00:31:39.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:39.697 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2255481
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2255481
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2255481 ']'
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
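For readers reconstructing this environment, the nvmf_tcp_init trace above reduces to a short shell sequence. A minimal sketch, assuming the same hardware layout: two E810 ports the harness has already exposed as cvl_0_0 and cvl_0_1 (the interface names, namespace name, and 10.0.0.0/24 addresses are taken directly from the trace):

  # Move the target port into its own network namespace so initiator and
  # target must talk over the physical link rather than a local shortcut.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps cvl_0_1 at 10.0.0.1; the target gets 10.0.0.2 in the netns.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every command here appears verbatim in the trace; only the grouping and comments are added.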
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:39.697 10:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:39.697 [2024-11-20 10:49:11.259843] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:39.697 [2024-11-20 10:49:11.260989] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
00:31:39.697 [2024-11-20 10:49:11.261043] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:39.697 [2024-11-20 10:49:11.359532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:39.697 [2024-11-20 10:49:11.411478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:39.697 [2024-11-20 10:49:11.411530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:39.697 [2024-11-20 10:49:11.411538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:39.697 [2024-11-20 10:49:11.411545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:39.697 [2024-11-20 10:49:11.411552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:39.697 [2024-11-20 10:49:11.413267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:39.697 [2024-11-20 10:49:11.413461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:39.697 [2024-11-20 10:49:11.413461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:39.697 [2024-11-20 10:49:11.489819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:39.697 [2024-11-20 10:49:11.490719] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:31:39.697 [2024-11-20 10:49:11.491359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:39.697 [2024-11-20 10:49:11.491448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
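The NOTICE block above is the target booting: -m 0xE pins the app to cores 1-3, which is why three reactors report starting, and --interrupt-mode is what produces the spdk_thread_set_interrupt_mode messages. A reduced sketch of that launch follows, with a simple readiness poll standing in for the harness's waitforlisten helper (the poll loop and the SPDK variable are my assumptions, not the traced code; rpc_get_methods and -s are standard rpc.py usage):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target inside the target-side namespace, in interrupt mode.
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Stand-in for waitforlisten: poll the default RPC socket until it answers.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done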
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-11-20 10:49:12.282398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:39.959 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:40.220 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:40.482 [2024-11-20 10:49:12.663127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:40.482 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:40.743 10:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:31:40.743 Malloc0
00:31:41.004 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:41.004 Delay0
00:31:41.266 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:41.266 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:31:41.266 NULL1
00:31:41.266 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
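Everything from here to the end of the section is the hotplug stress loop itself: spdk_nvme_perf drives randread I/O at the subsystem while namespace 1 (backed by Delay0) is repeatedly removed and re-added, and NULL1 is resized one block larger on each pass, which is why null_size climbs 1001, 1002, ... 1052 in the trace below. A condensed shell sketch of that loop, with the perf and rpc.py invocations copied from the trace and the loop structure inferred from the ns_hotplug_stress.sh lines (@44-@50) being traced:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"
  # Background load generator; arguments exactly as traced at @40.
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  # Hotplug namespaces for as long as perf is still running (kill -0 = alive).
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"   # prints 'true' on success, as seen below
  done

The repeated remove_ns/add_ns/bdev_null_resize/true records that follow are successive iterations of this loop.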
00:31:41.527 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2256152 00:31:41.527 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:41.527 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:41.527 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.788 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.049 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:42.049 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:42.049 true 00:31:42.310 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:42.310 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.310 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.572 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:42.572 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:42.834 true 00:31:42.834 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:42.834 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.095 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.095 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:43.095 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:43.356 true 00:31:43.356 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:43.356 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.618 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.878 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:43.878 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:43.878 true 00:31:44.140 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:44.140 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.140 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.400 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:44.400 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:44.661 true 00:31:44.661 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:44.661 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.661 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.921 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:44.921 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:45.182 true 00:31:45.182 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:45.182 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.182 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.442 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:45.442 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:45.702 true 00:31:45.703 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:45.703 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.963 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.963 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:45.963 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:46.223 true 00:31:46.223 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:46.223 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.483 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.483 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:46.483 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:46.744 true 00:31:46.744 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:46.744 10:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.004 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.004 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:47.004 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:47.264 true 00:31:47.264 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2256152 00:31:47.264 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.524 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.785 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:47.785 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:47.785 true 00:31:47.785 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:47.785 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.046 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.308 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:48.308 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:48.308 true 00:31:48.308 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:48.308 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.569 10:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.834 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:48.834 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:49.094 true 00:31:49.094 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:49.094 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.094 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.355 10:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:49.355 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:49.615 true 00:31:49.615 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:49.615 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.876 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.876 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:49.876 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:50.138 true 00:31:50.138 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:50.138 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.399 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.399 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:50.399 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:50.660 true 00:31:50.660 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:50.660 10:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.921 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.921 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:50.921 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:51.181 true 00:31:51.181 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:51.181 10:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.442 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.702 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:51.702 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:51.702 true 00:31:51.702 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:51.702 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.964 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.225 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:52.225 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:52.225 true 00:31:52.225 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:52.225 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.487 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.748 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:52.748 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:52.748 true 00:31:52.748 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:52.748 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.009 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.270 10:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:53.270 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:53.270 true 00:31:53.270 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:53.270 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.531 10:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.792 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:53.792 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:54.053 true 00:31:54.053 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:54.053 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.053 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.315 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:54.315 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:54.579 true 00:31:54.579 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:54.579 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.579 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.840 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:54.841 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:55.102 true 00:31:55.102 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:55.102 10:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.362 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.362 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:55.362 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:55.623 true 00:31:55.623 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:55.623 10:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.884 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.884 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:55.884 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:56.145 true 00:31:56.145 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:56.145 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.406 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.406 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:56.406 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:56.667 true 00:31:56.667 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:56.667 10:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.929 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.190 10:49:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:57.190 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:57.190 true 00:31:57.190 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:57.190 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.451 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.711 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:57.711 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:57.711 true 00:31:57.711 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:57.711 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.972 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.233 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:58.233 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:58.233 true 00:31:58.233 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:58.233 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.493 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.754 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:58.754 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:58.754 true 00:31:59.013 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:59.014 10:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.014 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.274 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:59.274 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:59.534 true 00:31:59.534 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:31:59.534 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.534 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.793 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:59.793 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:00.053 true 00:32:00.053 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:00.053 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.313 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.313 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:00.313 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:00.573 true 00:32:00.573 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:00.573 10:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.833 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.833 10:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:32:00.833 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:32:01.093 true 00:32:01.093 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:01.093 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.353 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:01.613 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:32:01.613 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:32:01.613 true 00:32:01.613 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:01.613 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.873 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.134 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:32:02.134 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:32:02.134 true 00:32:02.134 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:02.134 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.393 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.652 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:32:02.652 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:32:02.652 true 00:32:02.911 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:02.911 10:49:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.911 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.170 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:32:03.170 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:32:03.429 true 00:32:03.429 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:03.429 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.429 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.689 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:32:03.689 10:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:32:03.950 true 00:32:03.950 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:03.950 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.209 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.209 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:32:04.209 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:32:04.469 true 00:32:04.469 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:04.469 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.729 10:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.729 10:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:32:04.729 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:32:04.989 true 00:32:04.989 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:04.989 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.249 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:05.510 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:32:05.510 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:32:05.510 true 00:32:05.510 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:05.510 10:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.770 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.030 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:32:06.030 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:32:06.030 true 00:32:06.030 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:06.030 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.290 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.550 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:32:06.550 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:32:06.550 true 00:32:06.550 10:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:06.550 10:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.811 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.070 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:32:07.070 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:32:07.331 true 00:32:07.331 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:07.331 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.331 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.592 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:32:07.593 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:32:07.854 true 00:32:07.854 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:07.854 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.854 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.115 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:32:08.115 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:32:08.375 true 00:32:08.375 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:08.375 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:08.636 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.636 10:49:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:32:08.636 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:32:08.897 true 00:32:08.897 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:08.897 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.157 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.157 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:32:09.157 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:32:09.417 true 00:32:09.417 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:09.417 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.678 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.938 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:32:09.938 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:32:09.938 true 00:32:09.938 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:09.938 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.199 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.460 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:32:10.460 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:32:10.460 true 00:32:10.460 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:10.460 10:49:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.721 10:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.983 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:32:10.983 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:32:10.983 true 00:32:10.983 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:10.983 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.243 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.504 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:32:11.504 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:32:11.504 true 00:32:11.764 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:11.764 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.764 Initializing NVMe Controllers 00:32:11.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:11.764 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:32:11.764 Controller IO queue size 128, less than required. 00:32:11.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:11.764 WARNING: Some requested NVMe devices were skipped 00:32:11.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:11.764 Initialization complete. Launching workers. 
00:32:11.764  ========================================================
00:32:11.764                                                                               Latency(us)
00:32:11.764 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:32:11.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30401.43      14.84    4210.21    1113.45   11477.86
00:32:11.764  ========================================================
00:32:11.764 Total                                                                  :   30401.43      14.84    4210.21    1113.45   11477.86
00:32:11.764
00:32:11.764 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.024 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:32:12.024 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:32:12.286 true 00:32:12.286 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2256152 00:32:12.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2256152) - No such process 00:32:12.286 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2256152 00:32:12.286 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:12.286 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:12.546 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:12.546 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:12.546 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:12.546 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:12.546 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:12.808 null0 00:32:12.808 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:12.808 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:12.808 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:12.808 null1 00:32:12.808 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:12.808 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
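Two quick consistency checks on the performance summary above: at 30401.43 IOPS, 14.84 MiB/s corresponds to roughly 512-byte IOs, and by Little's law the 4210.21 us average latency implies about 128 requests in flight, matching the reported IO queue size of 128. Both can be verified with bc (assuming MiB = 2^20 bytes):

  echo "scale=2; 30401.43 * 512 / 1048576" | bc       # -> 14.84 (MiB/s)
  echo "scale=1; 30401.43 * 4210.21 / 1000000" | bc   # -> 127.9, i.e. ~128 IOs in flight

The "No such process" from kill -0 above marks the end of the hot-plug loop traced at ns_hotplug_stress.sh lines 44-50: it keeps cycling namespace 1 and growing the NULL1 bdev for as long as the background I/O generator (PID 2256152 in this run) stays alive. A minimal sketch of what the trace implies, not the verbatim SPDK script; perf_pid and the rpc shorthand are assumed names:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while kill -0 "$perf_pid"; do                    # loop while the I/O generator runs
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      ((++null_size))                              # reached 1055 in this run
      "$rpc" bdev_null_resize NULL1 "$null_size"   # grow NULL1 to the new size
  done
  wait "$perf_pid"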
00:32:12.808 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:13.070 null2 00:32:13.070 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:13.070 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:13.070 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:13.331 null3 00:32:13.331 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:13.331 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:13.331 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:13.331 null4 00:32:13.331 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:13.331 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:13.331 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:13.592 null5 00:32:13.592 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:13.592 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:13.592 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:13.854 null6 00:32:13.854 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:13.854 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:13.854 10:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:13.854 null7 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
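From nthreads=8 onward the trace switches to the concurrent phase: lines 58-64 of ns_hotplug_stress.sh create eight 100 MB null bdevs with a 4096-byte block size, then launch one backgrounded add_remove worker per bdev, collecting the PIDs for a later wait. A minimal reconstruction of that sequence as the trace suggests it (rpc is the same assumed shorthand for scripts/rpc.py):

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # name, size (MB), block size
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &            # worker i hot-plugs nsid i+1
      pids+=($!)                                  # remember the worker's PID
  done
  wait "${pids[@]}"                               # the 'wait 2262329 ...' seen below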
00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.854 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
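Interleaved with those launches, each worker's own trace is already visible: add_remove (script lines 14-18) pins its namespace ID and bdev, then runs ten add/remove cycles against cnode1, which is exactly the churn that fills the rest of this excerpt. Reconstructed from the trace as a sketch:

  add_remove() {
      local nsid=$1 bdev=$2 i                     # e.g. 'add_remove 3 null2'
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }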
00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2262329 2262330 2262333 2262334 2262336 2262338 2262340 2262342 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:13.855 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:14.117 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:14.453 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.454 10:49:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:14.454 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:14.760 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.027 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.296 10:49:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:15.296 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:15.556 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:15.556 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:15.557 10:49:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:15.557 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.818 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:15.818 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:15.818 10:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:15.818 10:49:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.818 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.079 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.341 10:49:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.341 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.601 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.601 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.601 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.601 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 
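[Editor's note] Each sh@17/sh@18 record here is a JSON-RPC call into the running nvmf target. Schematically, for one of the adds traced above (the -s socket path and the payload shape follow rpc.py's general conventions; neither was captured in this run):

    # Attach bdev null4 to the subsystem as nsid 5; rpc.py sends roughly:
    #   {"method": "nvmf_subsystem_add_ns",
    #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
    #               "namespace": {"nsid": 5, "bdev_name": "null4"}}}
    "$rpc_py" -s /var/tmp/spdk.sock nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4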
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.602 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.862 
10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.862 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.127 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.128 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.391 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.652 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.913 rmmod nvme_tcp 00:32:17.913 rmmod nvme_fabrics 00:32:17.913 rmmod nvme_keyring 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2255481 ']' 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2255481 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2255481 ']' 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2255481 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2255481 00:32:17.913 10:49:50 
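[Editor's note] The workers' loops run out here (the trailing sh@16 checks fail with no further body), the trap is cleared, and nvmftestfini unloads the kernel modules before entering killprocess at common/autotest_common.sh@954. A hedged reconstruction of that helper, lined up with the @-markers in the surrounding records (the sudo branch body is an assumption and was not taken here):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                             # @954: reject an empty pid
        kill -0 "$pid" || return 0                            # @958: already gone? nothing to do (assumed)
        if [ "$(uname)" = Linux ]; then                       # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: 'reactor_1' for this target
        fi
        if [ "$process_name" = sudo ]; then                   # @964: false here; presumably retargets the child
            pid=$(ps --no-headers -o pid= --ppid "$pid")
        fi
        echo "killing process with pid $pid"                  # @972
        kill "$pid"                                           # @973
        wait "$pid"                                           # @978: reap it so the listen port is freed
    }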
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2255481' 00:32:17.913 killing process with pid 2255481 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2255481 00:32:17.913 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2255481 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.175 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.719 00:32:20.719 real 0m49.029s 00:32:20.719 user 3m3.282s 00:32:20.719 sys 0m22.395s 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:20.719 ************************************ 00:32:20.719 END TEST nvmf_ns_hotplug_stress 00:32:20.719 ************************************ 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:20.719 
10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:20.719 ************************************ 00:32:20.719 START TEST nvmf_delete_subsystem 00:32:20.719 ************************************ 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:20.719 * Looking for test storage... 00:32:20.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.719 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.720 --rc genhtml_branch_coverage=1 00:32:20.720 --rc genhtml_function_coverage=1 00:32:20.720 --rc genhtml_legend=1 00:32:20.720 --rc geninfo_all_blocks=1 00:32:20.720 --rc geninfo_unexecuted_blocks=1 00:32:20.720 00:32:20.720 ' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.720 --rc genhtml_branch_coverage=1 00:32:20.720 --rc genhtml_function_coverage=1 00:32:20.720 --rc genhtml_legend=1 00:32:20.720 --rc geninfo_all_blocks=1 00:32:20.720 --rc geninfo_unexecuted_blocks=1 00:32:20.720 00:32:20.720 ' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.720 --rc genhtml_branch_coverage=1 00:32:20.720 --rc genhtml_function_coverage=1 00:32:20.720 --rc genhtml_legend=1 00:32:20.720 --rc geninfo_all_blocks=1 00:32:20.720 --rc geninfo_unexecuted_blocks=1 00:32:20.720 00:32:20.720 ' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:20.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.720 --rc genhtml_branch_coverage=1 00:32:20.720 --rc genhtml_function_coverage=1 00:32:20.720 --rc 
genhtml_legend=1 00:32:20.720 --rc geninfo_all_blocks=1 00:32:20.720 --rc geninfo_unexecuted_blocks=1 00:32:20.720 00:32:20.720 ' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.720 10:49:52 
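[Editor's note] Backing up to the lcov probe traced at scripts/common.sh@333-@373 above ('lt 1.15 2' deciding which coverage flags apply): a hedged reconstruction of the comparison helpers, with the case arms and return plumbing partly assumed:

    decimal() {                                        # @353-@355
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi  # non-numeric fields count as 0 (assumed)
    }

    lt() { cmp_versions "$1" '<' "$2"; }               # the entry point seen in the trace

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l v                # @333-@334
        IFS=.-: read -ra ver1 <<< "$1"                 # @336: '1.15' -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"                 # @337: '2'    -> (2)
        local op=$2 lt=0 gt=0 eq=0                     # @338/@343
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}          # @340-@341: 2 and 1 here
        case "$op" in '<') lt=1 ;; '>') gt=1 ;; *) eq=1 ;; esac   # @344-@345; other arms assumed
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do   # @364
            ver1[v]=$(decimal "${ver1[v]:-0}")         # @365
            ver2[v]=$(decimal "${ver2[v]:-0}")         # @366
            if (( ver1[v] > ver2[v] )); then (( gt )); return; fi  # @367
            if (( ver1[v] < ver2[v] )); then (( lt )); return; fi  # @368: 1 < 2 with lt=1 -> return 0
        done
        (( eq ))                                       # identical versions succeed only for equality ops
    }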
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.720 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.721 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.859 10:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:28.859 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:28.859 10:49:59 
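[Editor's note] Written out in one place, the device-id buckets the @320-@344 records assemble (each pci_bus_cache lookup resolves an id to the matching PCI addresses; the *_ids names below are mine for clarity, values copied from the trace):

    intel=0x8086 mellanox=0x15b3                # @313
    e810_ids=(0x1592 0x159b)                    # Intel E810 family; 0x159b matches both ports found below
    x722_ids=(0x37d2)                           # Intel X722
    mlx_ids=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)  # NVIDIA/Mellanox ConnectX/BlueField

Because the configured NIC filter is e810 (the [[ e810 == e810 ]] test at @355), @356 narrows pci_devs to that bucket only.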
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:28.860 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:28.860 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.860 10:49:59 
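[Editor's note] The per-port probe walked through in the records here and just below reduces to a short glob over sysfs; lightly condensed (the link-state filtering at @416-@422 is folded into a comment):

    for pci in "${pci_devs[@]}"; do                              # @410: 0000:4b:00.0, then 0000:4b:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)         # @411: netdevs bound to this function
        # @416-@422: for tcp, keep only interfaces whose operstate is 'up'
        pci_net_devs=("${pci_net_devs[@]##*/}")                  # @427: strip the sysfs prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"  # @428: cvl_0_0 / cvl_0_1 here
        net_devs+=("${pci_net_devs[@]}")                         # @429
    done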
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:28.860 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:28.860 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.860 10:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:32:28.860 00:32:28.860 --- 10.0.0.2 ping statistics --- 00:32:28.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.860 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:28.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:32:28.860 00:32:28.860 --- 10.0.0.1 ping statistics --- 00:32:28.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.860 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2267484 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2267484 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2267484 ']' 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
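The nvmf_tcp_init sequence above built the test topology: with NET_TYPE=phy the two E810 ports are presumably cabled back to back, the target-side port is moved into a network namespace, and one /24 is split across the pair so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the real link. The same setup, collapsed from the trace:

    ip netns add cvl_0_0_ns_spdk                       # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host

The SPDK_NVMF comment on the iptables rule is what lets teardown later strip only its own rules with iptables-save | grep -v SPDK_NVMF | iptables-restore, as the cleanup near the end of this test shows.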
00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.860 10:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 [2024-11-20 10:50:00.318188] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:28.860 [2024-11-20 10:50:00.319317] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:32:28.860 [2024-11-20 10:50:00.319370] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.860 [2024-11-20 10:50:00.420137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:28.860 [2024-11-20 10:50:00.470989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.860 [2024-11-20 10:50:00.471039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.860 [2024-11-20 10:50:00.471047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.860 [2024-11-20 10:50:00.471055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.860 [2024-11-20 10:50:00.471061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.860 [2024-11-20 10:50:00.472667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.860 [2024-11-20 10:50:00.472670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.860 [2024-11-20 10:50:00.548979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:28.860 [2024-11-20 10:50:00.549508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:28.860 [2024-11-20 10:50:00.549826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
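nvmfappstart launched the target inside the namespace with event tracing and interrupt mode enabled (the -e 0xFFFF mask and --interrupt-mode flag in the command line above); the NOTICE lines confirm both reactors came up and every spdk_thread was switched to interrupt mode. A sketch of the launch-and-wait pattern, with a simple RPC poll standing in for the autotest waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # wait until the app answers on its RPC socket before issuing commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done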
00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 [2024-11-20 10:50:01.185738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.860 [2024-11-20 10:50:01.218155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.860 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:29.121 NULL1 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.121 10:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:29.121 Delay0 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2267760 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:29.121 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:29.121 [2024-11-20 10:50:01.341777] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
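Setup for the first delete_subsystem case is now complete; restated below as plain rpc.py calls (the trace uses the rpc_cmd wrapper with identical arguments). The delay bdev adds roughly one second of latency to every I/O against a null backing device, which guarantees that queue-depth-128 perf still has commands in flight when the subsystem is deleted two seconds in:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512               # 1000 MiB, 512 B blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The wall of "completed with error (sct=0, sc=8)" lines that follows is the expected outcome: status code type 0, status code 0x08 is the NVMe generic "Command Aborted due to SQ Deletion", which is what in-flight commands report when the delete tears their queue pairs down.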
00:32:31.034 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.034 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.034 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 
starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Write completed with error (sct=0, sc=8) 00:32:31.296 starting I/O failed: -6 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.296 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 starting I/O 
failed: -6 00:32:31.297 starting I/O failed: -6 00:32:31.297 starting I/O failed: -6 00:32:31.297 starting I/O failed: -6 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 starting I/O failed: -6 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 [2024-11-20 10:50:03.509883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4da400d490 is same with the state(6) to be set 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 
00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:31.297 Write completed with error (sct=0, sc=8) 00:32:31.297 Read completed with error (sct=0, sc=8) 00:32:32.240 [2024-11-20 10:50:04.481375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d99a0 is same with the state(6) to be set 00:32:32.240 Write completed with error (sct=0, sc=8) 00:32:32.240 Write completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Write completed with error (sct=0, sc=8) 00:32:32.240 Write completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Write completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Write completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.240 Read completed 
with error (sct=0, sc=8) 00:32:32.240 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 [2024-11-20 10:50:04.509363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d84a0 is same with the state(6) to be set 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 [2024-11-20 10:50:04.509585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8860 is same with the state(6) to be set 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 [2024-11-20 10:50:04.510782] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4da400d020 is same with the state(6) to be set 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Read completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 Write completed with error (sct=0, sc=8) 00:32:32.241 [2024-11-20 10:50:04.510895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4da400d7c0 is same with the state(6) to be set 00:32:32.241 Initializing NVMe Controllers 00:32:32.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.241 Controller IO queue size 128, less than required. 00:32:32.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:32.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:32.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:32.241 Initialization complete. Launching workers. 00:32:32.241 ======================================================== 00:32:32.241 Latency(us) 00:32:32.241 Device Information : IOPS MiB/s Average min max 00:32:32.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.24 0.09 910547.14 431.57 1007730.01 00:32:32.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.88 0.07 962258.47 313.56 2001420.45 00:32:32.241 ======================================================== 00:32:32.241 Total : 332.12 0.16 933883.14 313.56 2001420.45 00:32:32.241 00:32:32.241 [2024-11-20 10:50:04.511704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d99a0 (9): Bad file descriptor 00:32:32.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:32.241 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.241 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:32:32.241 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2267760 00:32:32.241 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2267760 00:32:32.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2267760) - No such process 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2267760 00:32:32.814 10:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2267760 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2267760 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:32.814 [2024-11-20 10:50:05.046024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@54 -- # perf_pid=2268499 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:32.814 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:32.814 [2024-11-20 10:50:05.148072] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:32:33.389 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:33.389 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:33.389 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:33.960 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:33.960 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:33.960 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:34.221 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:34.221 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:34.221 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:34.792 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:34.792 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:34.792 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:35.362 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:35.362 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:35.363 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:35.934 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:35.934 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:35.934 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
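After recreating the subsystem, the script launches a 3-second perf run and polls for its completion rather than blocking: kill -0 every half second, bounded at about twenty polls. Collapsed from the surrounding trace (the timeout handling is a sketch; the script's exact failure path may differ):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not finish in time" >&2; exit 1; }
        sleep 0.5
    done
    wait "$perf_pid"    # reap it; "kill: ... No such process" above marks the exit

Note the latency numbers in the summary below: with Delay0 configured for 1,000,000 us on every path, per-I/O averages settle right around one second.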
00:32:36.195 Initializing NVMe Controllers 00:32:36.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:36.195 Controller IO queue size 128, less than required. 00:32:36.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:36.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:36.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:36.195 Initialization complete. Launching workers. 00:32:36.195 ======================================================== 00:32:36.195 Latency(us) 00:32:36.195 Device Information : IOPS MiB/s Average min max 00:32:36.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003004.75 1000363.82 1041795.62 00:32:36.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004658.58 1000409.79 1011426.90 00:32:36.195 ======================================================== 00:32:36.195 Total : 256.00 0.12 1003831.67 1000363.82 1041795.62 00:32:36.195 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2268499 00:32:36.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2268499) - No such process 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2268499 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.455 rmmod nvme_tcp 00:32:36.455 rmmod nvme_fabrics 00:32:36.455 rmmod nvme_keyring 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2267484 ']' 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2267484 00:32:36.455 10:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2267484 ']' 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2267484 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267484 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267484' 00:32:36.455 killing process with pid 2267484 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2267484 00:32:36.455 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2267484 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.715 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:38.630 00:32:38.630 real 0m18.363s 00:32:38.630 user 0m26.551s 00:32:38.630 sys 0m7.549s 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:32:38.630 ************************************ 00:32:38.630 END TEST nvmf_delete_subsystem 00:32:38.630 ************************************ 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:38.630 ************************************ 00:32:38.630 START TEST nvmf_host_management 00:32:38.630 ************************************ 00:32:38.630 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:38.891 * Looking for test storage... 00:32:38.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.891 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:38.892 10:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:38.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.892 --rc genhtml_branch_coverage=1 00:32:38.892 --rc genhtml_function_coverage=1 00:32:38.892 --rc genhtml_legend=1 00:32:38.892 --rc geninfo_all_blocks=1 00:32:38.892 --rc geninfo_unexecuted_blocks=1 00:32:38.892 00:32:38.892 ' 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:38.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.892 --rc genhtml_branch_coverage=1 00:32:38.892 --rc genhtml_function_coverage=1 00:32:38.892 --rc genhtml_legend=1 00:32:38.892 --rc geninfo_all_blocks=1 00:32:38.892 --rc geninfo_unexecuted_blocks=1 00:32:38.892 00:32:38.892 ' 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:38.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.892 --rc genhtml_branch_coverage=1 00:32:38.892 --rc genhtml_function_coverage=1 00:32:38.892 --rc genhtml_legend=1 00:32:38.892 --rc geninfo_all_blocks=1 00:32:38.892 --rc geninfo_unexecuted_blocks=1 00:32:38.892 00:32:38.892 ' 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:38.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.892 --rc genhtml_branch_coverage=1 00:32:38.892 --rc genhtml_function_coverage=1 00:32:38.892 --rc genhtml_legend=1 00:32:38.892 --rc geninfo_all_blocks=1 00:32:38.892 --rc geninfo_unexecuted_blocks=1 00:32:38.892 00:32:38.892 ' 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
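The version check traced above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x before it enables the branch/function coverage flags: cmp_versions splits each version string on '.', '-' and ':' into an array and compares field by field, numerically. A minimal standalone sketch of that technique (version_lt is a hypothetical name; the script's own helper is cmp_versions):

version_lt() {
    # Split both versions on '.', '-' or ':' into arrays, as cmp_versions does.
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    # Compare numerically, field by field; missing fields count as 0.
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2"   # 1 < 2, so this prints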
00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.892 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.893 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # 
pci_net_devs=() 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:47.040 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.041 10:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:47.041 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:47.041 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.041 10:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:47.041 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:47.041 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
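The two "Found net devices" lines come from gather_supported_nvmf_pci_devs walking a table of known Intel (0x1592/0x159b/0x37d2) and Mellanox device IDs against the PCI bus and then resolving each matching function to its kernel netdev through sysfs. The lookup reduces to a scan like the sketch below, which hardcodes the 0x8086:0x159b (E810) pair reported above; the paths follow the standard sysfs layout:

# List kernel net interfaces backed by Intel E810 functions (0x8086:0x159b).
for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done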
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:47.041 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:47.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:47.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms
00:32:47.042 
00:32:47.042 --- 10.0.0.2 ping statistics ---
00:32:47.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:47.042 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:47.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:47.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms
00:32:47.042 
00:32:47.042 --- 10.0.0.1 ping statistics ---
00:32:47.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:47.042 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2273188
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2273188
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2273188 ']'
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:47.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.042 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.042 [2024-11-20 10:50:18.786241] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:47.042 [2024-11-20 10:50:18.787388] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:32:47.042 [2024-11-20 10:50:18.787441] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.042 [2024-11-20 10:50:18.886719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:47.042 [2024-11-20 10:50:18.940319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.042 [2024-11-20 10:50:18.940371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.042 [2024-11-20 10:50:18.940380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.042 [2024-11-20 10:50:18.940387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.042 [2024-11-20 10:50:18.940394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.042 [2024-11-20 10:50:18.942642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.042 [2024-11-20 10:50:18.942814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.042 [2024-11-20 10:50:18.942974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:47.042 [2024-11-20 10:50:18.942974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.042 [2024-11-20 10:50:19.020485] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:47.042 [2024-11-20 10:50:19.021332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:47.042 [2024-11-20 10:50:19.021620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:47.042 [2024-11-20 10:50:19.022107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:47.042 [2024-11-20 10:50:19.022144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
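What nvmf_tcp_init did above, condensed: with two physical E810 ports on the same host, the target-side port is moved into its own network namespace so initiator and target traffic crosses the physical link rather than the loopback path, and nvmf_tgt is then launched inside that namespace. This is a straight condensation of the commands in the trace, nothing added except the comments:

# Target port lives in a private namespace; initiator port stays in the root one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
# Start the target inside the namespace: interrupt mode, cores 1-4 (-m 0x1E).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E

The ping exchange in both directions (10.0.0.1 <-> 10.0.0.2) is the sanity check that the two stacks can actually reach each other before any NVMe/TCP traffic is attempted.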
00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.304 [2024-11-20 10:50:19.643983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.304 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.573 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:47.573 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:47.573 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:47.573 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.573 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.573 Malloc0 00:32:47.573 [2024-11-20 10:50:19.740320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.573 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2273554 00:32:47.574 10:50:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2273554 /var/tmp/bdevperf.sock 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2273554 ']' 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:47.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:47.574 { 00:32:47.574 "params": { 00:32:47.574 "name": "Nvme$subsystem", 00:32:47.574 "trtype": "$TEST_TRANSPORT", 00:32:47.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.574 "adrfam": "ipv4", 00:32:47.574 "trsvcid": "$NVMF_PORT", 00:32:47.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.574 "hdgst": ${hdgst:-false}, 00:32:47.574 "ddgst": ${ddgst:-false} 00:32:47.574 }, 00:32:47.574 "method": "bdev_nvme_attach_controller" 00:32:47.574 } 00:32:47.574 EOF 00:32:47.574 )") 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
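The rpc_cmd batch above (rm -rf rpcs.txt, cat, rpc_cmd) created the TCP transport, the Malloc0 bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) and the listener that comes up on 10.0.0.2:4420, while the JSON that gen_nvmf_target_json assembles just below is what bdevperf uses to attach as host0. Judging by the traced values, the batch is roughly equivalent to the following rpc.py calls; this is a reconstruction, not the literal contents of rpcs.txt:

# Approximate equivalent of the rpcs.txt batch, as individual rpc.py calls.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Restricting the subsystem to the explicit host NQN is the point of the test: the nvmf_subsystem_remove_host/add_host calls later in the trace toggle that host's access while I/O is in flight.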
00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:47.574 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:47.574 "params": { 00:32:47.574 "name": "Nvme0", 00:32:47.574 "trtype": "tcp", 00:32:47.574 "traddr": "10.0.0.2", 00:32:47.574 "adrfam": "ipv4", 00:32:47.574 "trsvcid": "4420", 00:32:47.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.574 "hdgst": false, 00:32:47.574 "ddgst": false 00:32:47.574 }, 00:32:47.574 "method": "bdev_nvme_attach_controller" 00:32:47.574 }' 00:32:47.574 [2024-11-20 10:50:19.850514] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:32:47.574 [2024-11-20 10:50:19.850591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273554 ] 00:32:47.837 [2024-11-20 10:50:19.946253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.837 [2024-11-20 10:50:19.999806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.099 Running I/O for 10 seconds... 00:32:48.360 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:48.361 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=462 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 462 -ge 100 ']' 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.625 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:48.625 [2024-11-20 10:50:20.747672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.625 [2024-11-20 10:50:20.747729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.625 [2024-11-20 10:50:20.747739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.747798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 
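The waitforio helper traced above polls bdevperf's private RPC socket until the Nvme0n1 bdev has completed at least 100 reads, proving I/O is actually flowing before the test starts revoking the host's access; here the first sample already reports 462. The loop reduces to the sketch below (the sleep between retries is an assumption; the socket path, bdev name, jq filter and thresholds are straight from the trace):

# Poll bdev_get_iostat on bdevperf's RPC socket until I/O is observed.
for (( i = 10; i != 0; i-- )); do
    read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break    # enough reads seen; test can proceed
    sleep 0.25                               # retry interval (assumed)
done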
00:32:48.626 [2024-11-20 10:50:20.748141]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.748205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24312a0 is same with the state(6) to be set 00:32:48.626 [2024-11-20 10:50:20.752091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.626 [2024-11-20 10:50:20.752154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.626 [2024-11-20 10:50:20.752173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.626 [2024-11-20 10:50:20.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.626 [2024-11-20 10:50:20.752201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.626 [2024-11-20 10:50:20.752210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.626 [2024-11-20 10:50:20.752219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.626 [2024-11-20 10:50:20.752227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.626 [2024-11-20 10:50:20.752235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114c000 is same with the state(6) to be set 00:32:48.626 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.626 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:48.626 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.626 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
00:32:48.627 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:48.627 10:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:48.627 [2024-11-20 10:50:20.767400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114c000 (9): Bad file descriptor
00:32:48.627 [2024-11-20 10:50:20.767518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:48.627 [2024-11-20 10:50:20.767532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:48.627 [matching print_command/print_completion pairs follow, 10:50:20.767550 through 10:50:20.768725, for READ cid:61-63 (lba 73344-73600) and WRITE cid:0-59 (lba 73728-81280), each len:128, every completion ABORTED - SQ DELETION (00/08)]
00:32:48.628 [2024-11-20 10:50:20.770010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:48.628 task offset: 73216 on job bdev=Nvme0n1 fails
00:32:48.628
00:32:48.628 Latency(us)
00:32:48.628 [2024-11-20T09:50:21.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:48.629 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:48.629 Job: Nvme0n1 ended in about 0.44 seconds with error
00:32:48.629 Verification LBA range: start 0x0 length 0x400
00:32:48.629 Nvme0n1 : 0.44 1311.95 82.00 146.79 0.00 42592.90 1747.63 37792.43
00:32:48.629 [2024-11-20T09:50:21.005Z] ===================================================================================================================
00:32:48.629 [2024-11-20T09:50:21.005Z] Total : 1311.95 82.00 146.79 0.00 42592.90 1747.63 37792.43
00:32:48.629 [2024-11-20 10:50:20.772233] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:48.629 [2024-11-20 10:50:20.819750] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
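The "(00/08)" in the completions above is the NVMe status pair (status code type/status code) in hex: type 0x0 (generic command status) and code 0x08, Command Aborted due to SQ Deletion, which is what in-flight I/O is expected to return when its submission queue goes away. A hedged one-liner for tallying those aborts when reading such output offline (bdevperf.log is a placeholder filename, not a file this harness creates):

# Count the completions aborted by the SQ deletion in a saved copy of this log.
grep -c 'ABORTED - SQ DELETION (00/08)' bdevperf.log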
00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2273554 00:32:49.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2273554) - No such process 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:49.574 { 00:32:49.574 "params": { 00:32:49.574 "name": "Nvme$subsystem", 00:32:49.574 "trtype": "$TEST_TRANSPORT", 00:32:49.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.574 "adrfam": "ipv4", 00:32:49.574 "trsvcid": "$NVMF_PORT", 00:32:49.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.574 "hdgst": ${hdgst:-false}, 00:32:49.574 "ddgst": ${ddgst:-false} 00:32:49.574 }, 00:32:49.574 "method": "bdev_nvme_attach_controller" 00:32:49.574 } 00:32:49.574 EOF 00:32:49.574 )") 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:49.574 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:49.574 "params": { 00:32:49.574 "name": "Nvme0", 00:32:49.574 "trtype": "tcp", 00:32:49.574 "traddr": "10.0.0.2", 00:32:49.574 "adrfam": "ipv4", 00:32:49.574 "trsvcid": "4420", 00:32:49.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:49.574 "hdgst": false, 00:32:49.574 "ddgst": false 00:32:49.574 }, 00:32:49.574 "method": "bdev_nvme_attach_controller" 00:32:49.574 }' 00:32:49.574 [2024-11-20 10:50:21.830504] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
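The gen_nvmf_target_json output shown in the printf above reaches bdevperf as --json /dev/fd/62. A hedged standalone equivalent writes the same bdev_nvme_attach_controller stanza to a regular file; every parameter value is copied from the log, while the subsystems/config wrapper is the usual SPDK JSON-config shape and nvme0.json is a placeholder name:

# Hedged sketch: same attach and workload as the traced bdevperf run.
cat > nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Queue depth 64, 64 KiB I/O, verify workload, 1 second: the flags traced above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json nvme0.json -q 64 -o 65536 -w verify -t 1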
00:32:49.574 [2024-11-20 10:50:21.830580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273908 ] 00:32:49.574 [2024-11-20 10:50:21.924270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.834 [2024-11-20 10:50:21.976782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.094 Running I/O for 1 seconds... 00:32:51.038 2145.00 IOPS, 134.06 MiB/s 00:32:51.038 Latency(us) 00:32:51.038 [2024-11-20T09:50:23.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:51.038 Verification LBA range: start 0x0 length 0x400 00:32:51.038 Nvme0n1 : 1.01 2179.57 136.22 0.00 0.00 28695.84 2689.71 28835.84 00:32:51.038 [2024-11-20T09:50:23.414Z] =================================================================================================================== 00:32:51.038 [2024-11-20T09:50:23.414Z] Total : 2179.57 136.22 0.00 0.00 28695.84 2689.71 28835.84 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.298 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.298 rmmod nvme_tcp 00:32:51.298 rmmod nvme_fabrics 00:32:51.298 rmmod nvme_keyring 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2273188 ']' 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2273188 00:32:51.299 10:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2273188 ']' 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2273188 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273188 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273188' 00:32:51.299 killing process with pid 2273188 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2273188 00:32:51.299 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2273188 00:32:51.299 [2024-11-20 10:50:23.667616] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.559 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.471 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.471 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:53.471 00:32:53.471 real 0m14.776s 00:32:53.471 user 
0m20.071s 00:32:53.471 sys 0m7.400s 00:32:53.471 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.471 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:53.471 ************************************ 00:32:53.472 END TEST nvmf_host_management 00:32:53.472 ************************************ 00:32:53.472 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:53.472 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:53.472 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.472 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:53.733 ************************************ 00:32:53.733 START TEST nvmf_lvol 00:32:53.733 ************************************ 00:32:53.733 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:53.733 * Looking for test storage... 00:32:53.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:53.733 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.733 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.733 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.733 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:53.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.734 --rc genhtml_branch_coverage=1 00:32:53.734 --rc genhtml_function_coverage=1 00:32:53.734 --rc genhtml_legend=1 00:32:53.734 --rc geninfo_all_blocks=1 00:32:53.734 --rc geninfo_unexecuted_blocks=1 00:32:53.734 00:32:53.734 ' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:53.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.734 --rc genhtml_branch_coverage=1 00:32:53.734 --rc genhtml_function_coverage=1 00:32:53.734 --rc genhtml_legend=1 00:32:53.734 --rc geninfo_all_blocks=1 00:32:53.734 --rc geninfo_unexecuted_blocks=1 00:32:53.734 00:32:53.734 ' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:53.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.734 --rc genhtml_branch_coverage=1 00:32:53.734 --rc genhtml_function_coverage=1 00:32:53.734 --rc genhtml_legend=1 00:32:53.734 --rc geninfo_all_blocks=1 00:32:53.734 --rc geninfo_unexecuted_blocks=1 00:32:53.734 00:32:53.734 ' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:53.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.734 --rc genhtml_branch_coverage=1 00:32:53.734 --rc genhtml_function_coverage=1 
00:32:53.734 --rc genhtml_legend=1 00:32:53.734 --rc geninfo_all_blocks=1 00:32:53.734 --rc geninfo_unexecuted_blocks=1 00:32:53.734 00:32:53.734 ' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the same repeated toolchain directories]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the same repeated toolchain directories]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the same repeated toolchain directories]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:53.734 10:50:26
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:53.734 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.735 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.735 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.735 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:53.735 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:53.735 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:53.735 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:01.875 10:50:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:01.875 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:01.875 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:01.875 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:01.875 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.875 
10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.875 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:01.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:33:01.876 00:33:01.876 --- 10.0.0.2 ping statistics --- 00:33:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.876 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:33:01.876 00:33:01.876 --- 10.0.0.1 ping statistics --- 00:33:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.876 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2278284 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2278284 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2278284 ']' 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.876 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:01.876 [2024-11-20 10:50:33.684155] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
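For reference, the nvmf_tcp_init sequence traced above reduces to the shell below. This is a condensed sketch, not the full common.sh logic; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the cvl_0_0_ns_spdk namespace name are specific to this rig.

  # Put the target-side E810 port in its own namespace; keep the initiator port in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port on the initiator interface, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1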
00:33:01.876 [2024-11-20 10:50:33.685282] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:33:01.876 [2024-11-20 10:50:33.685336] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.876 [2024-11-20 10:50:33.787878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:01.876 [2024-11-20 10:50:33.839915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.876 [2024-11-20 10:50:33.839970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.876 [2024-11-20 10:50:33.839979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.876 [2024-11-20 10:50:33.839986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.876 [2024-11-20 10:50:33.839992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.876 [2024-11-20 10:50:33.842039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.876 [2024-11-20 10:50:33.842218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.876 [2024-11-20 10:50:33.842272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.876 [2024-11-20 10:50:33.918875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:01.876 [2024-11-20 10:50:33.919854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:01.876 [2024-11-20 10:50:33.920454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:01.876 [2024-11-20 10:50:33.920563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
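The nvmf_lvol body that the trace runs next is driven entirely through rpc.py against the target that just started. Condensed into plain shell, with $rpc standing in for the full scripts/rpc.py path and the generated UUIDs captured instead of hard-coded:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192           # flags exactly as traced above
  $rpc bdev_malloc_create 64 512                         # -> Malloc0
  $rpc bdev_malloc_create 64 512                         # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)         # lvstore on the RAID-0 bdev
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)        # 20 (MiB) lvol
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf runs random writes against the namespace, the lvol is
  # snapshotted, resized, cloned, and the clone inflated:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"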
00:33:02.136 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.136 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:02.136 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:02.136 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.136 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:02.396 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.396 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:02.396 [2024-11-20 10:50:34.699301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.396 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:02.656 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:02.656 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:02.917 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:02.917 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:03.178 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:03.439 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=53f11f2e-e3eb-4c69-b800-1f84d1155e93 00:33:03.439 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 53f11f2e-e3eb-4c69-b800-1f84d1155e93 lvol 20 00:33:03.439 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7b401fc3-12c7-43fd-8927-869fa72b5eae 00:33:03.439 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:03.701 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b401fc3-12c7-43fd-8927-869fa72b5eae 00:33:03.962 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:03.962 [2024-11-20 10:50:36.251175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:33:03.962 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:04.222 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2278952 00:33:04.222 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:04.222 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:05.165 10:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7b401fc3-12c7-43fd-8927-869fa72b5eae MY_SNAPSHOT 00:33:05.426 10:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c2a0e3a9-1afb-47ea-8b85-b0289cc76b43 00:33:05.426 10:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7b401fc3-12c7-43fd-8927-869fa72b5eae 30 00:33:05.687 10:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c2a0e3a9-1afb-47ea-8b85-b0289cc76b43 MY_CLONE 00:33:05.947 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=10e3545a-3808-44e9-9f2d-d1a450e2c830 00:33:05.947 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 10e3545a-3808-44e9-9f2d-d1a450e2c830 00:33:06.518 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2278952 00:33:14.671 Initializing NVMe Controllers 00:33:14.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:14.671 Controller IO queue size 128, less than required. 00:33:14.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:14.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:14.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:14.671 Initialization complete. Launching workers. 
00:33:14.671 ======================================================== 00:33:14.671 Latency(us) 00:33:14.671 Device Information : IOPS MiB/s Average min max 00:33:14.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15475.70 60.45 8271.59 2469.52 85093.46 00:33:14.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15327.10 59.87 8353.00 2942.90 66373.44 00:33:14.671 ======================================================== 00:33:14.671 Total : 30802.80 120.32 8312.10 2469.52 85093.46 00:33:14.671 00:33:14.671 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:14.996 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b401fc3-12c7-43fd-8927-869fa72b5eae 00:33:14.996 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53f11f2e-e3eb-4c69-b800-1f84d1155e93 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.282 rmmod nvme_tcp 00:33:15.282 rmmod nvme_fabrics 00:33:15.282 rmmod nvme_keyring 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2278284 ']' 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2278284 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2278284 ']' 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2278284 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278284 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.282 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278284' 00:33:15.282 killing process with pid 2278284 00:33:15.283 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2278284 00:33:15.283 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2278284 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.543 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.456 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.456 00:33:17.456 real 0m23.936s 00:33:17.456 user 0m56.108s 00:33:17.456 sys 0m10.839s 00:33:17.456 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.456 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:17.456 ************************************ 00:33:17.456 END TEST nvmf_lvol 00:33:17.456 ************************************ 00:33:17.718 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:17.718 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:17.718 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.718 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:17.718 ************************************ 00:33:17.718 START TEST nvmf_lvs_grow 00:33:17.718 
************************************ 00:33:17.718 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:17.718 * Looking for test storage... 00:33:17.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.719 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:17.719 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:17.719 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:17.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.719 --rc genhtml_branch_coverage=1 00:33:17.719 --rc genhtml_function_coverage=1 00:33:17.719 --rc genhtml_legend=1 00:33:17.719 --rc geninfo_all_blocks=1 00:33:17.719 --rc geninfo_unexecuted_blocks=1 00:33:17.719 00:33:17.719 ' 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:17.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.719 --rc genhtml_branch_coverage=1 00:33:17.719 --rc genhtml_function_coverage=1 00:33:17.719 --rc genhtml_legend=1 00:33:17.719 --rc geninfo_all_blocks=1 00:33:17.719 --rc geninfo_unexecuted_blocks=1 00:33:17.719 00:33:17.719 ' 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:17.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.719 --rc genhtml_branch_coverage=1 00:33:17.719 --rc genhtml_function_coverage=1 00:33:17.719 --rc genhtml_legend=1 00:33:17.719 --rc geninfo_all_blocks=1 00:33:17.719 --rc geninfo_unexecuted_blocks=1 00:33:17.719 00:33:17.719 ' 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:17.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.719 --rc genhtml_branch_coverage=1 00:33:17.719 --rc genhtml_function_coverage=1 00:33:17.719 --rc genhtml_legend=1 00:33:17.719 --rc geninfo_all_blocks=1 00:33:17.719 --rc geninfo_unexecuted_blocks=1 00:33:17.719 00:33:17.719 ' 00:33:17.719 10:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.719 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
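The build_nvmf_app_args trace here (it finishes just below) assembles the target's argument list one append at a time. Roughly, keeping only the appends actually visible in this trace:

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id 0, full tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty for this hugepage-backed run
  NVMF_APP+=(--interrupt-mode)                  # the suite was invoked with --interrupt-mode
  # Later, nvmf_tcp_init prefixes the namespace wrapper:
  #   NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  # which yields the launch command seen further down:
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1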
00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.982 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.983 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.128 10:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.128 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
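The loop the trace is entering here resolves each supported PCI function to its kernel netdev by listing sysfs, which is how cvl_0_0 and cvl_0_1 are discovered. A trimmed sketch of that loop (the real code also checks that the link is up and handles the rdma/tcp split):

  net_devs=()
  for pci in "${pci_devs[@]}"; do                       # 0000:4b:00.0 and 0000:4b:00.1 here
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdevs bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path
      net_devs+=("${pci_net_devs[@]}")
  done
  # -> net_devs=(cvl_0_0 cvl_0_1); the first becomes the target port, the second the initiator.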
00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:26.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:26.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:26.129 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:26.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.129 10:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:33:26.129 00:33:26.129 --- 10.0.0.2 ping statistics --- 00:33:26.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.129 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:26.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:33:26.129 00:33:26.129 --- 10.0.0.1 ping statistics --- 00:33:26.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.129 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:26.129 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2285220 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2285220 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2285220 ']' 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.130 10:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:26.130 [2024-11-20 10:50:57.694215] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
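The lvs_grow_clean case that follows exercises lvstore growth on a file-backed AIO bdev: create a 200M file, build an lvstore plus a 150M lvol on it, grow the file to 400M, rescan, and re-check the cluster count. Condensed from the trace below, with the same $rpc shorthand and $testdir standing in for test/nvmf/target:

  truncate -s 200M "$testdir/aio_bdev"
  $rpc bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096          # 4 KiB block size
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49 before the grow
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$testdir/aio_bdev"
  $rpc bdev_aio_rescan aio_bdev     # bdev_aio notices the resize (51200 -> 102400 blocks)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # re-checked after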
00:33:26.130 [2024-11-20 10:50:57.695346] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:33:26.130 [2024-11-20 10:50:57.695398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.130 [2024-11-20 10:50:57.793618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.130 [2024-11-20 10:50:57.847197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.130 [2024-11-20 10:50:57.847247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.130 [2024-11-20 10:50:57.847256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.130 [2024-11-20 10:50:57.847263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.130 [2024-11-20 10:50:57.847270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.130 [2024-11-20 10:50:57.848003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.130 [2024-11-20 10:50:57.923827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:26.130 [2024-11-20 10:50:57.924114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:26.391 [2024-11-20 10:50:58.712870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.391 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:26.652 ************************************ 00:33:26.652 START TEST lvs_grow_clean 00:33:26.652 ************************************ 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:26.652 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:26.652 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:26.652 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:26.911 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f1739588-9b40-4d95-b64a-c5be26a50821 00:33:26.911 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:26.911 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:27.172 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:27.172 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:27.172 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1739588-9b40-4d95-b64a-c5be26a50821 lvol 150 00:33:27.432 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7cfceff2-57ad-46f2-a2e9-2f035897ec15 00:33:27.432 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:27.432 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:27.432 [2024-11-20 10:50:59.744565] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:27.432 [2024-11-20 10:50:59.744735] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:27.432 true 00:33:27.432 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:27.432 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:27.693 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:27.693 10:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:27.954 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7cfceff2-57ad-46f2-a2e9-2f035897ec15 00:33:27.954 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:28.215 [2024-11-20 10:51:00.437233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.215 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2285714 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2285714 /var/tmp/bdevperf.sock 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2285714 ']' 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.477 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.477 [2024-11-20 10:51:00.675519] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:33:28.477 [2024-11-20 10:51:00.675596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285714 ] 00:33:28.477 [2024-11-20 10:51:00.771122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.477 [2024-11-20 10:51:00.825310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.423 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.423 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:29.423 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:29.423 Nvme0n1 00:33:29.423 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:29.684 [ 00:33:29.684 { 00:33:29.684 "name": "Nvme0n1", 00:33:29.684 "aliases": [ 00:33:29.684 "7cfceff2-57ad-46f2-a2e9-2f035897ec15" 00:33:29.684 ], 00:33:29.684 "product_name": "NVMe disk", 00:33:29.684 "block_size": 4096, 00:33:29.684 "num_blocks": 38912, 00:33:29.684 "uuid": "7cfceff2-57ad-46f2-a2e9-2f035897ec15", 00:33:29.684 "numa_id": 0, 00:33:29.684 "assigned_rate_limits": { 00:33:29.684 "rw_ios_per_sec": 0, 00:33:29.684 "rw_mbytes_per_sec": 0, 00:33:29.684 "r_mbytes_per_sec": 0, 00:33:29.684 "w_mbytes_per_sec": 0 00:33:29.684 }, 00:33:29.684 "claimed": false, 00:33:29.684 "zoned": false, 00:33:29.684 "supported_io_types": { 00:33:29.684 "read": true, 00:33:29.684 "write": true, 00:33:29.684 "unmap": true, 00:33:29.684 "flush": true, 00:33:29.684 "reset": true, 00:33:29.684 "nvme_admin": true, 00:33:29.684 "nvme_io": true, 00:33:29.684 "nvme_io_md": false, 00:33:29.684 "write_zeroes": true, 00:33:29.684 "zcopy": false, 00:33:29.684 "get_zone_info": false, 00:33:29.684 "zone_management": false, 00:33:29.684 "zone_append": false, 00:33:29.684 "compare": true, 00:33:29.684 "compare_and_write": true, 00:33:29.684 "abort": true, 00:33:29.684 "seek_hole": false, 00:33:29.684 "seek_data": false, 00:33:29.684 "copy": true, 
00:33:29.684 "nvme_iov_md": false 00:33:29.684 }, 00:33:29.684 "memory_domains": [ 00:33:29.684 { 00:33:29.684 "dma_device_id": "system", 00:33:29.684 "dma_device_type": 1 00:33:29.684 } 00:33:29.684 ], 00:33:29.684 "driver_specific": { 00:33:29.684 "nvme": [ 00:33:29.684 { 00:33:29.684 "trid": { 00:33:29.684 "trtype": "TCP", 00:33:29.684 "adrfam": "IPv4", 00:33:29.684 "traddr": "10.0.0.2", 00:33:29.684 "trsvcid": "4420", 00:33:29.684 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:29.684 }, 00:33:29.684 "ctrlr_data": { 00:33:29.684 "cntlid": 1, 00:33:29.684 "vendor_id": "0x8086", 00:33:29.684 "model_number": "SPDK bdev Controller", 00:33:29.684 "serial_number": "SPDK0", 00:33:29.684 "firmware_revision": "25.01", 00:33:29.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:29.684 "oacs": { 00:33:29.684 "security": 0, 00:33:29.684 "format": 0, 00:33:29.684 "firmware": 0, 00:33:29.684 "ns_manage": 0 00:33:29.684 }, 00:33:29.684 "multi_ctrlr": true, 00:33:29.684 "ana_reporting": false 00:33:29.684 }, 00:33:29.684 "vs": { 00:33:29.684 "nvme_version": "1.3" 00:33:29.684 }, 00:33:29.684 "ns_data": { 00:33:29.684 "id": 1, 00:33:29.684 "can_share": true 00:33:29.684 } 00:33:29.684 } 00:33:29.684 ], 00:33:29.684 "mp_policy": "active_passive" 00:33:29.684 } 00:33:29.684 } 00:33:29.684 ] 00:33:29.684 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2286119 00:33:29.684 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:29.684 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:29.684 Running I/O for 10 seconds... 
00:33:31.072 Latency(us) 00:33:31.072 [2024-11-20T09:51:03.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.072 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:33:31.072 [2024-11-20T09:51:03.448Z] =================================================================================================================== 00:33:31.072 [2024-11-20T09:51:03.448Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:33:31.072 00:33:31.644 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:31.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.905 Nvme0n1 : 2.00 17050.00 66.60 0.00 0.00 0.00 0.00 0.00 00:33:31.905 [2024-11-20T09:51:04.281Z] =================================================================================================================== 00:33:31.905 [2024-11-20T09:51:04.281Z] Total : 17050.00 66.60 0.00 0.00 0.00 0.00 0.00 00:33:31.905 00:33:31.905 true 00:33:31.905 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:31.905 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:32.166 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:32.166 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:32.166 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2286119 00:33:32.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:32.737 Nvme0n1 : 3.00 17166.33 67.06 0.00 0.00 0.00 0.00 0.00 00:33:32.737 [2024-11-20T09:51:05.113Z] =================================================================================================================== 00:33:32.737 [2024-11-20T09:51:05.113Z] Total : 17166.33 67.06 0.00 0.00 0.00 0.00 0.00 00:33:32.737 00:33:34.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:34.119 Nvme0n1 : 4.00 17478.50 68.28 0.00 0.00 0.00 0.00 0.00 00:33:34.119 [2024-11-20T09:51:06.495Z] =================================================================================================================== 00:33:34.119 [2024-11-20T09:51:06.495Z] Total : 17478.50 68.28 0.00 0.00 0.00 0.00 0.00 00:33:34.119 00:33:35.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.061 Nvme0n1 : 5.00 19062.80 74.46 0.00 0.00 0.00 0.00 0.00 00:33:35.061 [2024-11-20T09:51:07.437Z] =================================================================================================================== 00:33:35.061 [2024-11-20T09:51:07.437Z] Total : 19062.80 74.46 0.00 0.00 0.00 0.00 0.00 00:33:35.061 00:33:36.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.002 Nvme0n1 : 6.00 20120.00 78.59 0.00 0.00 0.00 0.00 0.00 00:33:36.002 [2024-11-20T09:51:08.378Z] 
=================================================================================================================== 00:33:36.002 [2024-11-20T09:51:08.378Z] Total : 20120.00 78.59 0.00 0.00 0.00 0.00 0.00 00:33:36.002 00:33:36.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.944 Nvme0n1 : 7.00 20876.71 81.55 0.00 0.00 0.00 0.00 0.00 00:33:36.944 [2024-11-20T09:51:09.320Z] =================================================================================================================== 00:33:36.944 [2024-11-20T09:51:09.320Z] Total : 20876.71 81.55 0.00 0.00 0.00 0.00 0.00 00:33:36.944 00:33:37.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.883 Nvme0n1 : 8.00 21442.12 83.76 0.00 0.00 0.00 0.00 0.00 00:33:37.883 [2024-11-20T09:51:10.259Z] =================================================================================================================== 00:33:37.883 [2024-11-20T09:51:10.259Z] Total : 21442.12 83.76 0.00 0.00 0.00 0.00 0.00 00:33:37.883 00:33:38.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.824 Nvme0n1 : 9.00 21881.89 85.48 0.00 0.00 0.00 0.00 0.00 00:33:38.824 [2024-11-20T09:51:11.200Z] =================================================================================================================== 00:33:38.824 [2024-11-20T09:51:11.200Z] Total : 21881.89 85.48 0.00 0.00 0.00 0.00 0.00 00:33:38.824 00:33:39.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.764 Nvme0n1 : 10.00 22240.10 86.88 0.00 0.00 0.00 0.00 0.00 00:33:39.764 [2024-11-20T09:51:12.140Z] =================================================================================================================== 00:33:39.764 [2024-11-20T09:51:12.140Z] Total : 22240.10 86.88 0.00 0.00 0.00 0.00 0.00 00:33:39.764 00:33:39.764 00:33:39.764 Latency(us) 00:33:39.764 [2024-11-20T09:51:12.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.764 Nvme0n1 : 10.00 22240.64 86.88 0.00 0.00 5752.02 2990.08 32768.00 00:33:39.764 [2024-11-20T09:51:12.140Z] =================================================================================================================== 00:33:39.764 [2024-11-20T09:51:12.140Z] Total : 22240.64 86.88 0.00 0.00 5752.02 2990.08 32768.00 00:33:39.764 { 00:33:39.764 "results": [ 00:33:39.764 { 00:33:39.764 "job": "Nvme0n1", 00:33:39.764 "core_mask": "0x2", 00:33:39.764 "workload": "randwrite", 00:33:39.764 "status": "finished", 00:33:39.764 "queue_depth": 128, 00:33:39.764 "io_size": 4096, 00:33:39.764 "runtime": 10.002634, 00:33:39.764 "iops": 22240.641814945942, 00:33:39.764 "mibps": 86.87750708963259, 00:33:39.764 "io_failed": 0, 00:33:39.764 "io_timeout": 0, 00:33:39.764 "avg_latency_us": 5752.018332771447, 00:33:39.764 "min_latency_us": 2990.08, 00:33:39.764 "max_latency_us": 32768.0 00:33:39.765 } 00:33:39.765 ], 00:33:39.765 "core_count": 1 00:33:39.765 } 00:33:39.765 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2285714 00:33:39.765 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2285714 ']' 00:33:39.765 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2285714 00:33:39.765 10:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:39.765 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:39.765 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285714 00:33:40.025 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:40.025 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:40.025 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285714' 00:33:40.025 killing process with pid 2285714 00:33:40.025 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2285714 00:33:40.025 Received shutdown signal, test time was about 10.000000 seconds 00:33:40.025 00:33:40.025 Latency(us) 00:33:40.025 [2024-11-20T09:51:12.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.025 [2024-11-20T09:51:12.401Z] =================================================================================================================== 00:33:40.025 [2024-11-20T09:51:12.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:40.025 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2285714 00:33:40.025 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:40.286 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:40.286 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:40.286 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:40.546 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:40.546 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:40.546 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:40.806 [2024-11-20 10:51:12.988641] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:40.806 10:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:40.806 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:41.066 request: 00:33:41.066 { 00:33:41.066 "uuid": "f1739588-9b40-4d95-b64a-c5be26a50821", 00:33:41.066 "method": "bdev_lvol_get_lvstores", 00:33:41.066 "req_id": 1 00:33:41.066 } 00:33:41.066 Got JSON-RPC error response 00:33:41.066 response: 00:33:41.066 { 00:33:41.066 "code": -19, 00:33:41.066 "message": "No such device" 00:33:41.066 } 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:41.066 aio_bdev 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7cfceff2-57ad-46f2-a2e9-2f035897ec15 
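The -19 error above is the expected outcome, not a failure: deleting the base AIO bdev hot-removes the lvstore, so the follow-up query has to report "No such device". Re-creating the bdev on the same file then rediscovers the lvstore and its lvol from on-disk metadata. As a sketch (names as in the trace; the harness's NOT helper is just an exit-status inversion):

# Drop the base bdev; vbdev_lvol closes the lvstore riding on it.
$RPC bdev_aio_delete aio_bdev

# Querying the lvstore must now fail with JSON-RPC -19 (No such device).
if $RPC bdev_lvol_get_lvstores -u "$lvs"; then
    echo "lvstore unexpectedly survived base-bdev removal" >&2
    exit 1
fi

# Re-create the AIO bdev on the same backing file and wait for the lvol
# to be re-examined and re-registered under its old UUID.
$RPC bdev_aio_create "$AIO" aio_bdev 4096
$RPC bdev_wait_for_examine
$RPC bdev_get_bdevs -b "$lvol" -t 2000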
00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7cfceff2-57ad-46f2-a2e9-2f035897ec15 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:41.066 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:41.327 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7cfceff2-57ad-46f2-a2e9-2f035897ec15 -t 2000 00:33:41.587 [ 00:33:41.587 { 00:33:41.587 "name": "7cfceff2-57ad-46f2-a2e9-2f035897ec15", 00:33:41.587 "aliases": [ 00:33:41.587 "lvs/lvol" 00:33:41.587 ], 00:33:41.587 "product_name": "Logical Volume", 00:33:41.587 "block_size": 4096, 00:33:41.587 "num_blocks": 38912, 00:33:41.587 "uuid": "7cfceff2-57ad-46f2-a2e9-2f035897ec15", 00:33:41.587 "assigned_rate_limits": { 00:33:41.587 "rw_ios_per_sec": 0, 00:33:41.587 "rw_mbytes_per_sec": 0, 00:33:41.587 "r_mbytes_per_sec": 0, 00:33:41.587 "w_mbytes_per_sec": 0 00:33:41.587 }, 00:33:41.587 "claimed": false, 00:33:41.587 "zoned": false, 00:33:41.587 "supported_io_types": { 00:33:41.587 "read": true, 00:33:41.587 "write": true, 00:33:41.587 "unmap": true, 00:33:41.587 "flush": false, 00:33:41.587 "reset": true, 00:33:41.587 "nvme_admin": false, 00:33:41.587 "nvme_io": false, 00:33:41.587 "nvme_io_md": false, 00:33:41.587 "write_zeroes": true, 00:33:41.587 "zcopy": false, 00:33:41.587 "get_zone_info": false, 00:33:41.587 "zone_management": false, 00:33:41.587 "zone_append": false, 00:33:41.587 "compare": false, 00:33:41.587 "compare_and_write": false, 00:33:41.587 "abort": false, 00:33:41.587 "seek_hole": true, 00:33:41.587 "seek_data": true, 00:33:41.587 "copy": false, 00:33:41.587 "nvme_iov_md": false 00:33:41.587 }, 00:33:41.587 "driver_specific": { 00:33:41.587 "lvol": { 00:33:41.587 "lvol_store_uuid": "f1739588-9b40-4d95-b64a-c5be26a50821", 00:33:41.587 "base_bdev": "aio_bdev", 00:33:41.587 "thin_provision": false, 00:33:41.587 "num_allocated_clusters": 38, 00:33:41.587 "snapshot": false, 00:33:41.587 "clone": false, 00:33:41.587 "esnap_clone": false 00:33:41.587 } 00:33:41.587 } 00:33:41.587 } 00:33:41.587 ] 00:33:41.587 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:41.587 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:41.587 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:41.587 10:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:41.587 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:41.587 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:41.846 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:41.846 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7cfceff2-57ad-46f2-a2e9-2f035897ec15 00:33:42.106 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1739588-9b40-4d95-b64a-c5be26a50821 00:33:42.106 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:42.368 00:33:42.368 real 0m15.888s 00:33:42.368 user 0m15.572s 00:33:42.368 sys 0m1.464s 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.368 ************************************ 00:33:42.368 END TEST lvs_grow_clean 00:33:42.368 ************************************ 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.368 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:42.629 ************************************ 00:33:42.629 START TEST lvs_grow_dirty 00:33:42.629 ************************************ 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:42.629 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:42.891 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d859f1be-77ea-4924-a098-156b0affc879 00:33:42.891 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:42.891 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:43.152 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:43.152 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:43.152 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d859f1be-77ea-4924-a098-156b0affc879 lvol 150 00:33:43.152 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:43.152 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:43.152 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:43.413 [2024-11-20 10:51:15.640566] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:43.413 [2024-11-20 10:51:15.640734] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:43.413 true 00:33:43.413 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:43.413 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:43.674 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:43.674 10:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:43.674 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:43.935 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:44.196 [2024-11-20 10:51:16.313061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2289314 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2289314 /var/tmp/bdevperf.sock 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2289314 ']' 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:44.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.196 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:44.196 [2024-11-20 10:51:16.562494] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:33:44.196 [2024-11-20 10:51:16.562552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289314 ] 00:33:44.456 [2024-11-20 10:51:16.648178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.456 [2024-11-20 10:51:16.679291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.027 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.027 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:45.027 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:45.598 Nvme0n1 00:33:45.598 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:45.598 [ 00:33:45.598 { 00:33:45.598 "name": "Nvme0n1", 00:33:45.598 "aliases": [ 00:33:45.598 "99735b40-5f32-4f3e-912f-3f40ce57fc5d" 00:33:45.598 ], 00:33:45.598 "product_name": "NVMe disk", 00:33:45.598 "block_size": 4096, 00:33:45.598 "num_blocks": 38912, 00:33:45.598 "uuid": "99735b40-5f32-4f3e-912f-3f40ce57fc5d", 00:33:45.598 "numa_id": 0, 00:33:45.598 "assigned_rate_limits": { 00:33:45.598 "rw_ios_per_sec": 0, 00:33:45.598 "rw_mbytes_per_sec": 0, 00:33:45.598 "r_mbytes_per_sec": 0, 00:33:45.598 "w_mbytes_per_sec": 0 00:33:45.598 }, 00:33:45.598 "claimed": false, 00:33:45.598 "zoned": false, 00:33:45.598 "supported_io_types": { 00:33:45.598 "read": true, 00:33:45.598 "write": true, 00:33:45.598 "unmap": true, 00:33:45.598 "flush": true, 00:33:45.598 "reset": true, 00:33:45.598 "nvme_admin": true, 00:33:45.598 "nvme_io": true, 00:33:45.598 "nvme_io_md": false, 00:33:45.598 "write_zeroes": true, 00:33:45.598 "zcopy": false, 00:33:45.598 "get_zone_info": false, 00:33:45.598 "zone_management": false, 00:33:45.598 "zone_append": false, 00:33:45.598 "compare": true, 00:33:45.598 "compare_and_write": true, 00:33:45.598 "abort": true, 00:33:45.598 "seek_hole": false, 00:33:45.598 "seek_data": false, 00:33:45.598 "copy": true, 00:33:45.598 "nvme_iov_md": false 00:33:45.598 }, 00:33:45.598 "memory_domains": [ 00:33:45.598 { 00:33:45.598 "dma_device_id": "system", 00:33:45.598 "dma_device_type": 1 00:33:45.598 } 00:33:45.598 ], 00:33:45.598 "driver_specific": { 00:33:45.598 "nvme": [ 00:33:45.598 { 00:33:45.598 "trid": { 00:33:45.598 "trtype": "TCP", 00:33:45.598 "adrfam": "IPv4", 00:33:45.598 "traddr": "10.0.0.2", 00:33:45.598 "trsvcid": "4420", 00:33:45.598 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:45.598 }, 00:33:45.598 "ctrlr_data": 
{ 00:33:45.598 "cntlid": 1, 00:33:45.598 "vendor_id": "0x8086", 00:33:45.598 "model_number": "SPDK bdev Controller", 00:33:45.598 "serial_number": "SPDK0", 00:33:45.598 "firmware_revision": "25.01", 00:33:45.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:45.598 "oacs": { 00:33:45.598 "security": 0, 00:33:45.598 "format": 0, 00:33:45.598 "firmware": 0, 00:33:45.598 "ns_manage": 0 00:33:45.598 }, 00:33:45.598 "multi_ctrlr": true, 00:33:45.598 "ana_reporting": false 00:33:45.598 }, 00:33:45.598 "vs": { 00:33:45.598 "nvme_version": "1.3" 00:33:45.598 }, 00:33:45.598 "ns_data": { 00:33:45.598 "id": 1, 00:33:45.598 "can_share": true 00:33:45.598 } 00:33:45.598 } 00:33:45.598 ], 00:33:45.598 "mp_policy": "active_passive" 00:33:45.598 } 00:33:45.598 } 00:33:45.598 ] 00:33:45.598 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2289650 00:33:45.598 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:45.598 10:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:45.598 Running I/O for 10 seconds... 00:33:46.592 Latency(us) 00:33:46.592 [2024-11-20T09:51:18.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:46.592 Nvme0n1 : 1.00 17480.00 68.28 0.00 0.00 0.00 0.00 0.00 00:33:46.592 [2024-11-20T09:51:18.968Z] =================================================================================================================== 00:33:46.592 [2024-11-20T09:51:18.968Z] Total : 17480.00 68.28 0.00 0.00 0.00 0.00 0.00 00:33:46.592 00:33:47.533 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d859f1be-77ea-4924-a098-156b0affc879 00:33:47.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.794 Nvme0n1 : 2.00 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:33:47.794 [2024-11-20T09:51:20.170Z] =================================================================================================================== 00:33:47.794 [2024-11-20T09:51:20.170Z] Total : 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:33:47.794 00:33:47.794 true 00:33:47.794 10:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:47.794 10:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:48.054 10:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:48.054 10:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:48.054 10:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2289650 00:33:48.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:48.626 Nvme0n1 : 
3.00 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:33:48.626 [2024-11-20T09:51:21.002Z] =================================================================================================================== 00:33:48.626 [2024-11-20T09:51:21.002Z] Total : 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:33:48.626 00:33:50.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:50.010 Nvme0n1 : 4.00 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:33:50.010 [2024-11-20T09:51:22.386Z] =================================================================================================================== 00:33:50.010 [2024-11-20T09:51:22.386Z] Total : 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:33:50.010 00:33:50.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:50.952 Nvme0n1 : 5.00 18846.80 73.62 0.00 0.00 0.00 0.00 0.00 00:33:50.952 [2024-11-20T09:51:23.328Z] =================================================================================================================== 00:33:50.952 [2024-11-20T09:51:23.328Z] Total : 18846.80 73.62 0.00 0.00 0.00 0.00 0.00 00:33:50.952 00:33:51.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:51.895 Nvme0n1 : 6.00 19960.17 77.97 0.00 0.00 0.00 0.00 0.00 00:33:51.895 [2024-11-20T09:51:24.271Z] =================================================================================================================== 00:33:51.895 [2024-11-20T09:51:24.271Z] Total : 19960.17 77.97 0.00 0.00 0.00 0.00 0.00 00:33:51.895 00:33:52.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:52.837 Nvme0n1 : 7.00 20737.29 81.01 0.00 0.00 0.00 0.00 0.00 00:33:52.837 [2024-11-20T09:51:25.213Z] =================================================================================================================== 00:33:52.837 [2024-11-20T09:51:25.213Z] Total : 20737.29 81.01 0.00 0.00 0.00 0.00 0.00 00:33:52.837 00:33:53.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:53.778 Nvme0n1 : 8.00 21336.00 83.34 0.00 0.00 0.00 0.00 0.00 00:33:53.778 [2024-11-20T09:51:26.154Z] =================================================================================================================== 00:33:53.778 [2024-11-20T09:51:26.154Z] Total : 21336.00 83.34 0.00 0.00 0.00 0.00 0.00 00:33:53.778 00:33:54.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:54.721 Nvme0n1 : 9.00 21801.67 85.16 0.00 0.00 0.00 0.00 0.00 00:33:54.721 [2024-11-20T09:51:27.097Z] =================================================================================================================== 00:33:54.721 [2024-11-20T09:51:27.097Z] Total : 21801.67 85.16 0.00 0.00 0.00 0.00 0.00 00:33:54.721 00:33:55.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:55.662 Nvme0n1 : 10.00 22174.20 86.62 0.00 0.00 0.00 0.00 0.00 00:33:55.662 [2024-11-20T09:51:28.038Z] =================================================================================================================== 00:33:55.662 [2024-11-20T09:51:28.038Z] Total : 22174.20 86.62 0.00 0.00 0.00 0.00 0.00 00:33:55.662 00:33:55.662 00:33:55.662 Latency(us) 00:33:55.662 [2024-11-20T09:51:28.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:55.662 Nvme0n1 : 10.00 22180.14 86.64 0.00 0.00 5768.30 4532.91 31457.28 00:33:55.662 
[2024-11-20T09:51:28.038Z] =================================================================================================================== 00:33:55.662 [2024-11-20T09:51:28.038Z] Total : 22180.14 86.64 0.00 0.00 5768.30 4532.91 31457.28 00:33:55.662 { 00:33:55.662 "results": [ 00:33:55.662 { 00:33:55.662 "job": "Nvme0n1", 00:33:55.662 "core_mask": "0x2", 00:33:55.663 "workload": "randwrite", 00:33:55.663 "status": "finished", 00:33:55.663 "queue_depth": 128, 00:33:55.663 "io_size": 4096, 00:33:55.663 "runtime": 10.003095, 00:33:55.663 "iops": 22180.1352481407, 00:33:55.663 "mibps": 86.64115331304961, 00:33:55.663 "io_failed": 0, 00:33:55.663 "io_timeout": 0, 00:33:55.663 "avg_latency_us": 5768.303798320338, 00:33:55.663 "min_latency_us": 4532.906666666667, 00:33:55.663 "max_latency_us": 31457.28 00:33:55.663 } 00:33:55.663 ], 00:33:55.663 "core_count": 1 00:33:55.663 } 00:33:55.663 10:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2289314 00:33:55.663 10:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2289314 ']' 00:33:55.663 10:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2289314 00:33:55.663 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:55.663 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.663 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289314 00:33:55.924 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:55.924 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:55.924 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289314' 00:33:55.924 killing process with pid 2289314 00:33:55.924 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2289314 00:33:55.924 Received shutdown signal, test time was about 10.000000 seconds 00:33:55.924 00:33:55.924 Latency(us) 00:33:55.924 [2024-11-20T09:51:28.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.924 [2024-11-20T09:51:28.300Z] =================================================================================================================== 00:33:55.924 [2024-11-20T09:51:28.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:55.924 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2289314 00:33:55.924 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:56.185 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:33:56.185 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:56.185 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2285220 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2285220 00:33:56.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2285220 Killed "${NVMF_APP[@]}" "$@" 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2291664 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2291664 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2291664 ']' 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:56.446 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:56.446 [2024-11-20 10:51:28.787285] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:56.446 [2024-11-20 10:51:28.788030] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:33:56.446 [2024-11-20 10:51:28.788066] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.707 [2024-11-20 10:51:28.867837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.707 [2024-11-20 10:51:28.896528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.707 [2024-11-20 10:51:28.896555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.707 [2024-11-20 10:51:28.896561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.707 [2024-11-20 10:51:28.896566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.707 [2024-11-20 10:51:28.896570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.707 [2024-11-20 10:51:28.897028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.707 [2024-11-20 10:51:28.946628] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:56.707 [2024-11-20 10:51:28.946821] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
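The app_setup_trace notices above spell out how to pull a trace from this target: the tracepoint group mask was set with -e 0xFFFF, and events land in a shared-memory file. Either invocation below is taken straight from the notice text (the copy destination is arbitrary):

spdk_trace -s nvmf -i 0            # snapshot events from the running target
cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw shm file for offline analysis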
00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.279 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:57.540 [2024-11-20 10:51:29.803336] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:57.540 [2024-11-20 10:51:29.803597] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:57.540 [2024-11-20 10:51:29.803691] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:57.540 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:57.801 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99735b40-5f32-4f3e-912f-3f40ce57fc5d -t 2000 00:33:57.801 [ 00:33:57.801 { 00:33:57.801 "name": "99735b40-5f32-4f3e-912f-3f40ce57fc5d", 00:33:57.801 "aliases": [ 00:33:57.801 "lvs/lvol" 00:33:57.801 ], 00:33:57.801 "product_name": "Logical Volume", 00:33:57.801 "block_size": 4096, 00:33:57.801 "num_blocks": 38912, 00:33:57.801 "uuid": "99735b40-5f32-4f3e-912f-3f40ce57fc5d", 00:33:57.801 "assigned_rate_limits": { 00:33:57.801 "rw_ios_per_sec": 0, 00:33:57.801 "rw_mbytes_per_sec": 0, 00:33:57.801 
"r_mbytes_per_sec": 0, 00:33:57.801 "w_mbytes_per_sec": 0 00:33:57.801 }, 00:33:57.801 "claimed": false, 00:33:57.801 "zoned": false, 00:33:57.801 "supported_io_types": { 00:33:57.801 "read": true, 00:33:57.801 "write": true, 00:33:57.801 "unmap": true, 00:33:57.801 "flush": false, 00:33:57.801 "reset": true, 00:33:57.801 "nvme_admin": false, 00:33:57.801 "nvme_io": false, 00:33:57.801 "nvme_io_md": false, 00:33:57.801 "write_zeroes": true, 00:33:57.801 "zcopy": false, 00:33:57.801 "get_zone_info": false, 00:33:57.801 "zone_management": false, 00:33:57.801 "zone_append": false, 00:33:57.801 "compare": false, 00:33:57.801 "compare_and_write": false, 00:33:57.801 "abort": false, 00:33:57.801 "seek_hole": true, 00:33:57.801 "seek_data": true, 00:33:57.801 "copy": false, 00:33:57.801 "nvme_iov_md": false 00:33:57.801 }, 00:33:57.801 "driver_specific": { 00:33:57.801 "lvol": { 00:33:57.801 "lvol_store_uuid": "d859f1be-77ea-4924-a098-156b0affc879", 00:33:57.801 "base_bdev": "aio_bdev", 00:33:57.801 "thin_provision": false, 00:33:57.801 "num_allocated_clusters": 38, 00:33:57.801 "snapshot": false, 00:33:57.801 "clone": false, 00:33:57.801 "esnap_clone": false 00:33:57.801 } 00:33:57.801 } 00:33:57.801 } 00:33:57.801 ] 00:33:58.062 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:58.062 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:58.062 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:58.062 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:58.062 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:58.062 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:58.327 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:58.327 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:58.588 [2024-11-20 10:51:30.705531] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:58.588 10:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:58.588 request: 00:33:58.588 { 00:33:58.588 "uuid": "d859f1be-77ea-4924-a098-156b0affc879", 00:33:58.588 "method": "bdev_lvol_get_lvstores", 00:33:58.588 "req_id": 1 00:33:58.588 } 00:33:58.588 Got JSON-RPC error response 00:33:58.588 response: 00:33:58.588 { 00:33:58.588 "code": -19, 00:33:58.588 "message": "No such device" 00:33:58.588 } 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:58.588 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:58.851 aio_bdev 00:33:58.851 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:58.851 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:58.851 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:58.851 10:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:58.851 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:58.851 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:58.851 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:59.112 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 99735b40-5f32-4f3e-912f-3f40ce57fc5d -t 2000 00:33:59.112 [ 00:33:59.112 { 00:33:59.112 "name": "99735b40-5f32-4f3e-912f-3f40ce57fc5d", 00:33:59.112 "aliases": [ 00:33:59.112 "lvs/lvol" 00:33:59.112 ], 00:33:59.112 "product_name": "Logical Volume", 00:33:59.112 "block_size": 4096, 00:33:59.112 "num_blocks": 38912, 00:33:59.112 "uuid": "99735b40-5f32-4f3e-912f-3f40ce57fc5d", 00:33:59.112 "assigned_rate_limits": { 00:33:59.112 "rw_ios_per_sec": 0, 00:33:59.112 "rw_mbytes_per_sec": 0, 00:33:59.112 "r_mbytes_per_sec": 0, 00:33:59.112 "w_mbytes_per_sec": 0 00:33:59.112 }, 00:33:59.112 "claimed": false, 00:33:59.112 "zoned": false, 00:33:59.112 "supported_io_types": { 00:33:59.112 "read": true, 00:33:59.112 "write": true, 00:33:59.112 "unmap": true, 00:33:59.112 "flush": false, 00:33:59.112 "reset": true, 00:33:59.112 "nvme_admin": false, 00:33:59.112 "nvme_io": false, 00:33:59.112 "nvme_io_md": false, 00:33:59.112 "write_zeroes": true, 00:33:59.112 "zcopy": false, 00:33:59.112 "get_zone_info": false, 00:33:59.112 "zone_management": false, 00:33:59.112 "zone_append": false, 00:33:59.112 "compare": false, 00:33:59.112 "compare_and_write": false, 00:33:59.112 "abort": false, 00:33:59.112 "seek_hole": true, 00:33:59.112 "seek_data": true, 00:33:59.112 "copy": false, 00:33:59.112 "nvme_iov_md": false 00:33:59.112 }, 00:33:59.112 "driver_specific": { 00:33:59.112 "lvol": { 00:33:59.112 "lvol_store_uuid": "d859f1be-77ea-4924-a098-156b0affc879", 00:33:59.112 "base_bdev": "aio_bdev", 00:33:59.112 "thin_provision": false, 00:33:59.112 "num_allocated_clusters": 38, 00:33:59.112 "snapshot": false, 00:33:59.112 "clone": false, 00:33:59.112 "esnap_clone": false 00:33:59.112 } 00:33:59.112 } 00:33:59.112 } 00:33:59.112 ] 00:33:59.112 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:59.112 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:59.112 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:59.373 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:59.373 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:59.373 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d859f1be-77ea-4924-a098-156b0affc879 00:33:59.634 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:59.634 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99735b40-5f32-4f3e-912f-3f40ce57fc5d 00:33:59.634 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d859f1be-77ea-4924-a098-156b0affc879 00:33:59.895 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:00.155 00:34:00.155 real 0m17.622s 00:34:00.155 user 0m35.453s 00:34:00.155 sys 0m3.155s 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:00.155 ************************************ 00:34:00.155 END TEST lvs_grow_dirty 00:34:00.155 ************************************ 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:00.155 nvmf_trace.0 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:00.155 
10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.155 rmmod nvme_tcp 00:34:00.155 rmmod nvme_fabrics 00:34:00.155 rmmod nvme_keyring 00:34:00.155 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.416 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:00.416 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:00.416 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2291664 ']' 00:34:00.416 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2291664 00:34:00.416 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2291664 ']' 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2291664 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291664 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2291664' 00:34:00.417 killing process with pid 2291664 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2291664 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2291664 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.417 10:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:02.986 00:34:02.986 real 0m44.928s 00:34:02.986 user 0m53.964s 00:34:02.986 sys 0m10.830s 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:02.986 ************************************ 00:34:02.986 END TEST nvmf_lvs_grow 00:34:02.986 ************************************ 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:02.986 ************************************ 00:34:02.986 START TEST nvmf_bdev_io_wait 00:34:02.986 ************************************ 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:02.986 * Looking for test storage... 
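A note on the firewall handling visible in the teardown above and again in the setup of the next test below: every rule the harness installs carries an 'SPDK_NVMF:' comment tag, so cleanup can sweep exactly its own rules by filtering the saved ruleset. Both commands as they appear in this log:

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
iptables-save | grep -v SPDK_NVMF | iptables-restore   # teardown: drop only tagged rules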
00:34:02.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:34:02.986 10:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.986 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:02.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.987 --rc genhtml_branch_coverage=1 00:34:02.987 --rc genhtml_function_coverage=1 00:34:02.987 --rc genhtml_legend=1 00:34:02.987 --rc geninfo_all_blocks=1 00:34:02.987 --rc geninfo_unexecuted_blocks=1 00:34:02.987 00:34:02.987 ' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:02.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.987 --rc genhtml_branch_coverage=1 00:34:02.987 --rc genhtml_function_coverage=1 00:34:02.987 --rc genhtml_legend=1 00:34:02.987 --rc geninfo_all_blocks=1 00:34:02.987 --rc geninfo_unexecuted_blocks=1 00:34:02.987 00:34:02.987 ' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:02.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.987 --rc genhtml_branch_coverage=1 00:34:02.987 --rc genhtml_function_coverage=1 00:34:02.987 --rc genhtml_legend=1 00:34:02.987 --rc geninfo_all_blocks=1 00:34:02.987 --rc geninfo_unexecuted_blocks=1 00:34:02.987 00:34:02.987 ' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:02.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.987 --rc genhtml_branch_coverage=1 00:34:02.987 --rc genhtml_function_coverage=1 00:34:02.987 --rc genhtml_legend=1 00:34:02.987 --rc geninfo_all_blocks=1 00:34:02.987 --rc 
geninfo_unexecuted_blocks=1 00:34:02.987 00:34:02.987 ' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:02.987 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.988 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.988 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.988 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:02.988 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:02.988 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:02.988 10:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
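build_nvmf_app_args above accumulates the target's command line in a bash array, which keeps multi-word flags quoting-safe when the final "${NVMF_APP[@]}" expansion runs. A minimal sketch of the idiom, with illustrative values:

NVMF_APP=(build/bin/nvmf_tgt)
NVMF_APP+=(-i 0 -e 0xFFFF)     # shm id and tracepoint group mask
NVMF_APP+=(--interrupt-mode)   # appended only when interrupt-mode testing is enabled
"${NVMF_APP[@]}" &             # expands to one word per element, no re-splitting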
00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.162 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
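The device scan above works from vendor:device pairs (0x8086:0x159b is the E810 "ice" part matched next) and then resolves each PCI function to its kernel netdev through sysfs. Two stock commands that show the same mapping by hand; cvl_0_0 is the name the harness has given the port:

lspci -nn -s 4b:00.0                          # shows the [8086:159b] id pair being matched
ls /sys/bus/pci/devices/0000:4b:00.0/net/     # -> cvl_0_0, the netdev behind that function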
00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:11.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:11.163 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:11.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:11.163 
10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:11.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:34:11.163 00:34:11.163 --- 10.0.0.2 ping statistics --- 00:34:11.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.163 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:34:11.163 00:34:11.163 --- 10.0.0.1 ping statistics --- 00:34:11.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.163 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.163 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2296536 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2296536 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2296536 ']' 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
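Recapping the nvmf_tcp_init wiring that the two pings just verified: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, and traffic loops between the two physical ports. The commands, as issued above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator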
00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 [2024-11-20 10:51:42.472566] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:11.164 [2024-11-20 10:51:42.473874] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:34:11.164 [2024-11-20 10:51:42.473929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.164 [2024-11-20 10:51:42.548783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.164 [2024-11-20 10:51:42.597966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.164 [2024-11-20 10:51:42.598015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.164 [2024-11-20 10:51:42.598022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.164 [2024-11-20 10:51:42.598028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.164 [2024-11-20 10:51:42.598032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.164 [2024-11-20 10:51:42.600185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.164 [2024-11-20 10:51:42.600293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.164 [2024-11-20 10:51:42.600453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.164 [2024-11-20 10:51:42.600454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.164 [2024-11-20 10:51:42.600969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
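The notices above are nvmf_tgt coming up inside the target namespace with -e 0xFFFF (all tracepoint groups enabled) and --interrupt-mode: four reactors start on cores 0-3 and the app thread is switched to interrupt mode. The startup banner itself says how to pull the trace data; following its hint, a snapshot of this run's tracepoints could be taken with (shm id 0, as passed via -i 0; binary path relative to the spdk checkout is an assumption):

  # live snapshot from the running target, exactly as the banner suggests
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the shm file around for offline analysis, as the last notice recommends
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0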
00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 [2024-11-20 10:51:42.786107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:11.164 [2024-11-20 10:51:42.786792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:11.164 [2024-11-20 10:51:42.786895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:11.164 [2024-11-20 10:51:42.787029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
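The two RPCs traced here are the point of the whole test: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool to five entries with a per-thread cache of one, so the bdevperf jobs launched later are likely to exhaust it and exercise the spdk_bdev_queue_io_wait path, and framework_start_init then completes the subsystem initialization that --wait-for-rpc had deferred. Issued by hand against the target's RPC socket, this plus the provisioning traced just below would look roughly like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1     # tiny bdev_io pool: forces the io_wait path
  $rpc framework_start_init           # finish init deferred by --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420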
00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 [2024-11-20 10:51:42.797212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 Malloc0 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.164 [2024-11-20 10:51:42.869659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2296751 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2296753 00:34:11.164 10:51:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.164 { 00:34:11.164 "params": { 00:34:11.164 "name": "Nvme$subsystem", 00:34:11.164 "trtype": "$TEST_TRANSPORT", 00:34:11.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.164 "adrfam": "ipv4", 00:34:11.164 "trsvcid": "$NVMF_PORT", 00:34:11.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.164 "hdgst": ${hdgst:-false}, 00:34:11.164 "ddgst": ${ddgst:-false} 00:34:11.164 }, 00:34:11.164 "method": "bdev_nvme_attach_controller" 00:34:11.164 } 00:34:11.164 EOF 00:34:11.164 )") 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2296755 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.164 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.164 { 00:34:11.164 "params": { 00:34:11.164 "name": "Nvme$subsystem", 00:34:11.164 "trtype": "$TEST_TRANSPORT", 00:34:11.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "$NVMF_PORT", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.165 "hdgst": ${hdgst:-false}, 00:34:11.165 "ddgst": ${ddgst:-false} 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 } 00:34:11.165 EOF 00:34:11.165 )") 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2296758 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.165 { 00:34:11.165 "params": { 00:34:11.165 "name": "Nvme$subsystem", 00:34:11.165 "trtype": "$TEST_TRANSPORT", 00:34:11.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "$NVMF_PORT", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.165 "hdgst": ${hdgst:-false}, 00:34:11.165 "ddgst": ${ddgst:-false} 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 } 00:34:11.165 EOF 00:34:11.165 )") 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.165 { 00:34:11.165 "params": { 00:34:11.165 "name": "Nvme$subsystem", 00:34:11.165 "trtype": "$TEST_TRANSPORT", 00:34:11.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "$NVMF_PORT", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.165 "hdgst": ${hdgst:-false}, 00:34:11.165 "ddgst": ${ddgst:-false} 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 } 00:34:11.165 EOF 00:34:11.165 )") 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2296751 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.165 "params": { 00:34:11.165 "name": "Nvme1", 00:34:11.165 "trtype": "tcp", 00:34:11.165 "traddr": "10.0.0.2", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "4420", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.165 "hdgst": false, 00:34:11.165 "ddgst": false 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 }' 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.165 "params": { 00:34:11.165 "name": "Nvme1", 00:34:11.165 "trtype": "tcp", 00:34:11.165 "traddr": "10.0.0.2", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "4420", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.165 "hdgst": false, 00:34:11.165 "ddgst": false 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 }' 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.165 "params": { 00:34:11.165 "name": "Nvme1", 00:34:11.165 "trtype": "tcp", 00:34:11.165 "traddr": "10.0.0.2", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "4420", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.165 "hdgst": false, 00:34:11.165 "ddgst": false 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 }' 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:11.165 10:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.165 "params": { 00:34:11.165 "name": "Nvme1", 00:34:11.165 "trtype": "tcp", 00:34:11.165 "traddr": "10.0.0.2", 00:34:11.165 "adrfam": "ipv4", 00:34:11.165 "trsvcid": "4420", 00:34:11.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.165 "hdgst": false, 00:34:11.165 "ddgst": false 00:34:11.165 }, 00:34:11.165 "method": "bdev_nvme_attach_controller" 00:34:11.165 }' 00:34:11.165 [2024-11-20 10:51:42.928186] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:34:11.165 [2024-11-20 10:51:42.928192] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:34:11.165 [2024-11-20 10:51:42.928260] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:11.165 [2024-11-20 10:51:42.928261] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:11.165 [2024-11-20 10:51:42.930585] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:34:11.165 [2024-11-20 10:51:42.930640] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:11.165 [2024-11-20 10:51:42.933014] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:34:11.165 [2024-11-20 10:51:42.933090] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:11.165 [2024-11-20 10:51:43.154493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.165 [2024-11-20 10:51:43.194352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:11.165 [2024-11-20 10:51:43.242606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.165 [2024-11-20 10:51:43.282354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:11.165 [2024-11-20 10:51:43.338605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.165 [2024-11-20 10:51:43.376976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:11.165 [2024-11-20 10:51:43.405386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.165 [2024-11-20 10:51:43.444747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:11.427 Running I/O for 1 seconds... 00:34:11.427 Running I/O for 1 seconds... 00:34:11.427 Running I/O for 1 seconds... 00:34:11.427 Running I/O for 1 seconds... 
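Four bdevperf instances are now running in parallel, one per workload (write on mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each fed its attach configuration on /dev/fd/63 by gen_nvmf_target_json. The printf output traced above is only the per-controller entry; the complete document wraps it in a bdev-subsystem config. A sketch of one such invocation, with the JSON wrapper reconstructed from common.sh rather than copied from this log, and /tmp/nvme.json as an illustrative stand-in for the fd-63 process substitution:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  printf '%s\n' '{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        },
        "method": "bdev_nvme_attach_controller"
      } ]
    } ]
  }' > /tmp/nvme.json
  $bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json /tmp/nvme.json &
  WRITE_PID=$!   # read/flush/unmap run the same way on masks 0x20/0x40/0x80
  wait "$WRITE_PID"

The per-workload latency tables that follow are the output of these four jobs completing their one-second runs.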
00:34:12.371 7945.00 IOPS, 31.04 MiB/s 00:34:12.371 Latency(us) 00:34:12.371 [2024-11-20T09:51:44.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.371 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:12.371 Nvme1n1 : 1.02 7908.77 30.89 0.00 0.00 16023.84 4396.37 27962.03 00:34:12.371 [2024-11-20T09:51:44.747Z] =================================================================================================================== 00:34:12.371 [2024-11-20T09:51:44.747Z] Total : 7908.77 30.89 0.00 0.00 16023.84 4396.37 27962.03 00:34:12.371 11587.00 IOPS, 45.26 MiB/s [2024-11-20T09:51:44.747Z] 7386.00 IOPS, 28.85 MiB/s 00:34:12.371 Latency(us) 00:34:12.371 [2024-11-20T09:51:44.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.371 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:12.371 Nvme1n1 : 1.01 11627.42 45.42 0.00 0.00 10966.34 5543.25 16493.23 00:34:12.371 [2024-11-20T09:51:44.747Z] =================================================================================================================== 00:34:12.371 [2024-11-20T09:51:44.747Z] Total : 11627.42 45.42 0.00 0.00 10966.34 5543.25 16493.23 00:34:12.371 00:34:12.371 Latency(us) 00:34:12.371 [2024-11-20T09:51:44.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.371 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:12.371 Nvme1n1 : 1.01 7497.76 29.29 0.00 0.00 17026.06 4123.31 33204.91 00:34:12.371 [2024-11-20T09:51:44.747Z] =================================================================================================================== 00:34:12.371 [2024-11-20T09:51:44.747Z] Total : 7497.76 29.29 0.00 0.00 17026.06 4123.31 33204.91 00:34:12.371 181744.00 IOPS, 709.94 MiB/s 00:34:12.371 Latency(us) 00:34:12.371 [2024-11-20T09:51:44.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.371 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:12.371 Nvme1n1 : 1.00 181383.25 708.53 0.00 0.00 701.86 302.08 1979.73 00:34:12.371 [2024-11-20T09:51:44.747Z] =================================================================================================================== 00:34:12.371 [2024-11-20T09:51:44.747Z] Total : 181383.25 708.53 0.00 0.00 701.86 302.08 1979.73 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2296753 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2296755 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2296758 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:12.632 10:51:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:12.632 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.633 rmmod nvme_tcp 00:34:12.633 rmmod nvme_fabrics 00:34:12.633 rmmod nvme_keyring 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2296536 ']' 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2296536 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2296536 ']' 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2296536 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2296536 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2296536' 00:34:12.633 killing process with pid 2296536 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2296536 00:34:12.633 10:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2296536 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:12.894 10:51:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.894 10:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.441 00:34:15.441 real 0m12.311s 00:34:15.441 user 0m16.199s 00:34:15.441 sys 0m7.376s 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.441 ************************************ 00:34:15.441 END TEST nvmf_bdev_io_wait 00:34:15.441 ************************************ 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:15.441 ************************************ 00:34:15.441 START TEST nvmf_queue_depth 00:34:15.441 ************************************ 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:15.441 * Looking for test storage... 
00:34:15.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.441 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:15.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.442 --rc genhtml_branch_coverage=1 00:34:15.442 --rc genhtml_function_coverage=1 00:34:15.442 --rc genhtml_legend=1 00:34:15.442 --rc geninfo_all_blocks=1 00:34:15.442 --rc geninfo_unexecuted_blocks=1 00:34:15.442 00:34:15.442 ' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.442 --rc genhtml_branch_coverage=1 00:34:15.442 --rc genhtml_function_coverage=1 00:34:15.442 --rc genhtml_legend=1 00:34:15.442 --rc geninfo_all_blocks=1 00:34:15.442 --rc geninfo_unexecuted_blocks=1 00:34:15.442 00:34:15.442 ' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.442 --rc genhtml_branch_coverage=1 00:34:15.442 --rc genhtml_function_coverage=1 00:34:15.442 --rc genhtml_legend=1 00:34:15.442 --rc geninfo_all_blocks=1 00:34:15.442 --rc geninfo_unexecuted_blocks=1 00:34:15.442 00:34:15.442 ' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.442 --rc genhtml_branch_coverage=1 00:34:15.442 --rc genhtml_function_coverage=1 00:34:15.442 --rc genhtml_legend=1 00:34:15.442 --rc geninfo_all_blocks=1 00:34:15.442 --rc 
geninfo_unexecuted_blocks=1 00:34:15.442 00:34:15.442 ' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:15.442 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:15.443 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
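The array setup just traced is the front half of gather_supported_nvmf_pci_devs; the records that follow match PCI vendor/device IDs against the supported NIC tables (the E810 pair here is 0x8086:0x159b) and resolve each matching function to its kernel net device through sysfs. Reduced to the E810 case, the discovery amounts to roughly this (a sketch; the real helper also knows x722 and Mellanox IDs and handles RDMA-specific checks):

  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do                 # e.g. cvl_0_0, cvl_0_1
          echo "Found ${pci##*/} (0x8086 - 0x159b): ${net##*/}"
      done
  done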
00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.584 10:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:23.584 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:23.584 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:34:23.584 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:23.584 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.584 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:34:23.585 00:34:23.585 --- 10.0.0.2 ping statistics --- 00:34:23.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.585 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:23.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:34:23.585 00:34:23.585 --- 10.0.0.1 ping statistics --- 00:34:23.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.585 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.585 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2301149 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2301149 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2301149 ']' 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
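The entries above show nvmf_tcp_init building SPDK's single-host NVMe/TCP topology: one E810 port (cvl_0_0) is moved into a private network namespace for the target, its sibling (cvl_0_1) stays in the root namespace for the initiator, the listener port is opened in iptables, and pings in both directions verify the link before the target app starts. A minimal hand-rolled sketch of the same steps, reusing the interface names and addresses from the log in place of the harness helpers:

    #!/usr/bin/env bash
    # Sketch only: mirrors the nvmf_tcp_init steps logged above.
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface (tagged as ipts does)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator

Splitting the two physical ports across namespaces is what lets a single machine exercise the real NIC path (back-to-back E810 ports here) instead of loopback.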
00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.585 [2024-11-20 10:51:55.105214] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:23.585 [2024-11-20 10:51:55.106340] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:34:23.585 [2024-11-20 10:51:55.106393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.585 [2024-11-20 10:51:55.209660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.585 [2024-11-20 10:51:55.260139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.585 [2024-11-20 10:51:55.260201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.585 [2024-11-20 10:51:55.260209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.585 [2024-11-20 10:51:55.260217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.585 [2024-11-20 10:51:55.260224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.585 [2024-11-20 10:51:55.261010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.585 [2024-11-20 10:51:55.338311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:23.585 [2024-11-20 10:51:55.338593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
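nvmfappstart then launches the target inside that namespace with --interrupt-mode and a single-core mask, and blocks until the app's RPC socket answers; the thread.c notices above confirm both SPDK threads came up in interrupt mode. Stripped of the harness wrappers, the launch reduces to roughly this (the socket poll is a crude stand-in for the harness's waitforlisten):

    # Sketch: start nvmf_tgt in interrupt mode on one core, wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done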
00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.585 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 [2024-11-20 10:51:55.965861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.847 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 Malloc0 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
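With the target up, the rpc_cmd calls above assemble the whole export path: a TCP transport, a 64 MiB RAM-backed bdev, a subsystem that admits any host, the bdev as its namespace, and a listener on the namespaced address. Issued directly with rpc.py, the same configuration looks like this (flags copied from the log; -u 8192 sets the I/O unit size, and -o is the TCP option nvmf/common.sh appends for this transport):

    # Sketch: queue_depth.sh's target configuration via rpc.py
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420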
00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 [2024-11-20 10:51:56.050007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2301466 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2301466 /var/tmp/bdevperf.sock 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2301466 ']' 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:23.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.847 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:23.847 [2024-11-20 10:51:56.108922] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
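The counterpart initiator is bdevperf, started above in wait-for-RPC mode (-z) with a queue depth of 1024 and 4 KiB verify I/O over 10 seconds; the NVMe/TCP attach and the run itself are driven over its private RPC socket. Reduced to its three moving parts, with paths as in the log:

    # Sketch: the initiator side of the queue-depth test
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The run that follows climbs from roughly 8.9k to 12.6k IOPS as the 1024-deep queue warms up, which is the behavior the queue-depth test is there to exercise.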
00:34:23.847 [2024-11-20 10:51:56.108991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301466 ] 00:34:23.847 [2024-11-20 10:51:56.200410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.108 [2024-11-20 10:51:56.253877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.681 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.681 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:24.681 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:24.681 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.681 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:24.681 NVMe0n1 00:34:24.681 10:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.681 10:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:24.943 Running I/O for 10 seconds... 00:34:26.831 8870.00 IOPS, 34.65 MiB/s [2024-11-20T09:52:00.591Z] 9182.50 IOPS, 35.87 MiB/s [2024-11-20T09:52:01.162Z] 9790.00 IOPS, 38.24 MiB/s [2024-11-20T09:52:02.546Z] 10755.00 IOPS, 42.01 MiB/s [2024-11-20T09:52:03.488Z] 11368.80 IOPS, 44.41 MiB/s [2024-11-20T09:52:04.429Z] 11781.83 IOPS, 46.02 MiB/s [2024-11-20T09:52:05.372Z] 12099.14 IOPS, 47.26 MiB/s [2024-11-20T09:52:06.312Z] 12290.75 IOPS, 48.01 MiB/s [2024-11-20T09:52:07.254Z] 12475.56 IOPS, 48.73 MiB/s [2024-11-20T09:52:07.514Z] 12597.50 IOPS, 49.21 MiB/s 00:34:35.138 Latency(us) 00:34:35.138 [2024-11-20T09:52:07.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.138 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:35.138 Verification LBA range: start 0x0 length 0x4000 00:34:35.138 NVMe0n1 : 10.08 12591.24 49.18 0.00 0.00 80740.91 15510.19 66846.72 00:34:35.138 [2024-11-20T09:52:07.514Z] =================================================================================================================== 00:34:35.138 [2024-11-20T09:52:07.514Z] Total : 12591.24 49.18 0.00 0.00 80740.91 15510.19 66846.72 00:34:35.138 { 00:34:35.138 "results": [ 00:34:35.138 { 00:34:35.138 "job": "NVMe0n1", 00:34:35.138 "core_mask": "0x1", 00:34:35.138 "workload": "verify", 00:34:35.138 "status": "finished", 00:34:35.138 "verify_range": { 00:34:35.138 "start": 0, 00:34:35.138 "length": 16384 00:34:35.138 }, 00:34:35.138 "queue_depth": 1024, 00:34:35.138 "io_size": 4096, 00:34:35.138 "runtime": 10.083753, 00:34:35.138 "iops": 12591.244549524368, 00:34:35.138 "mibps": 49.18454902157956, 00:34:35.138 "io_failed": 0, 00:34:35.138 "io_timeout": 0, 00:34:35.138 "avg_latency_us": 80740.91083530893, 00:34:35.139 "min_latency_us": 15510.186666666666, 00:34:35.139 "max_latency_us": 66846.72 00:34:35.139 } 00:34:35.139 ], 
00:34:35.139 "core_count": 1 00:34:35.139 } 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2301466 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2301466 ']' 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2301466 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2301466 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2301466' 00:34:35.139 killing process with pid 2301466 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2301466 00:34:35.139 Received shutdown signal, test time was about 10.000000 seconds 00:34:35.139 00:34:35.139 Latency(us) 00:34:35.139 [2024-11-20T09:52:07.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.139 [2024-11-20T09:52:07.515Z] =================================================================================================================== 00:34:35.139 [2024-11-20T09:52:07.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2301466 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.139 rmmod nvme_tcp 00:34:35.139 rmmod nvme_fabrics 00:34:35.139 rmmod nvme_keyring 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:35.139 10:52:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2301149 ']' 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2301149 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2301149 ']' 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2301149 00:34:35.139 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2301149 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2301149' 00:34:35.399 killing process with pid 2301149 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2301149 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2301149 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.399 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:37.945 00:34:37.945 real 0m22.489s 00:34:37.945 user 0m24.740s 00:34:37.945 sys 0m7.407s 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:37.945 ************************************ 00:34:37.945 END TEST nvmf_queue_depth 00:34:37.945 ************************************ 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:37.945 ************************************ 00:34:37.945 START TEST nvmf_target_multipath 00:34:37.945 ************************************ 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:37.945 * Looking for test storage... 00:34:37.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:37.945 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:37.945 10:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.945 --rc genhtml_branch_coverage=1 00:34:37.945 --rc genhtml_function_coverage=1 00:34:37.945 --rc genhtml_legend=1 00:34:37.945 --rc geninfo_all_blocks=1 00:34:37.945 --rc geninfo_unexecuted_blocks=1 00:34:37.945 00:34:37.945 ' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.945 --rc genhtml_branch_coverage=1 00:34:37.945 --rc genhtml_function_coverage=1 00:34:37.945 --rc genhtml_legend=1 00:34:37.945 --rc geninfo_all_blocks=1 00:34:37.945 --rc geninfo_unexecuted_blocks=1 00:34:37.945 00:34:37.945 ' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.945 --rc genhtml_branch_coverage=1 00:34:37.945 --rc genhtml_function_coverage=1 00:34:37.945 --rc genhtml_legend=1 00:34:37.945 --rc geninfo_all_blocks=1 00:34:37.945 --rc 
geninfo_unexecuted_blocks=1 00:34:37.945 00:34:37.945 ' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.945 --rc genhtml_branch_coverage=1 00:34:37.945 --rc genhtml_function_coverage=1 00:34:37.945 --rc genhtml_legend=1 00:34:37.945 --rc geninfo_all_blocks=1 00:34:37.945 --rc geninfo_unexecuted_blocks=1 00:34:37.945 00:34:37.945 ' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.945 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
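A few entries back, the multipath suite gates its coverage handling on the installed lcov version with "lt 1.15 2"; scripts/common.sh implements this by splitting dotted versions on '.' and '-' and comparing component-wise, with missing components treated as zero. The core of that comparison, simplified:

    # Sketch of the dotted-version compare behind "lt 1.15 2"
    version_lt() {                         # succeeds when $1 < $2
        local -a v1 v2; local i
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                           # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # true: 1 < 2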
00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:37.946 10:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:37.946 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
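nvmftestinit above ends by calling gather_supported_nvmf_pci_devs, which produces the stretch of log that follows: known Intel (E810/X722) and Mellanox device IDs are collected into arrays, each PCI address is checked against the transport type, and the kernel interface behind it is resolved by globbing sysfs. The resolution step at its core is just:

    # Sketch: PCI address -> net interface mapping, as in nvmf/common.sh@410-429 below
    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do      # the two E810 ports on this rig
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "Found net devices: ${net_devs[*]}"       # -> cvl_0_0 cvl_0_1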
00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.087 10:52:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:46.087 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:46.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:46.088 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:46.088 10:52:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:46.088 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:46.088 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:46.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:46.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:34:46.088 00:34:46.088 --- 10.0.0.2 ping statistics --- 00:34:46.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.088 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:46.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:46.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:34:46.088 00:34:46.088 --- 10.0.0.1 ping statistics --- 00:34:46.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.088 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:46.088 only one NIC for nvmf test 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.088 rmmod nvme_tcp 00:34:46.088 rmmod nvme_fabrics 00:34:46.088 rmmod nvme_keyring 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:46.088 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:46.088 10:52:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.089 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:47.475 10:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:47.475 
00:34:47.475 real 0m9.953s
00:34:47.475 user 0m2.163s
00:34:47.475 sys 0m5.728s
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:47.475 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:34:47.475 ************************************
00:34:47.475 END TEST nvmf_target_multipath
00:34:47.475 ************************************
00:34:47.736 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:34:47.736 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:47.736 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:47.736 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:47.736 ************************************
00:34:47.736 START TEST nvmf_zcopy
00:34:47.736 ************************************
00:34:47.736 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:34:47.736 * Looking for test storage...
00:34:47.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:47.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.736 --rc genhtml_branch_coverage=1 00:34:47.736 --rc genhtml_function_coverage=1 00:34:47.736 --rc genhtml_legend=1 00:34:47.736 --rc geninfo_all_blocks=1 00:34:47.736 --rc geninfo_unexecuted_blocks=1 00:34:47.736 00:34:47.736 ' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:47.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.736 --rc genhtml_branch_coverage=1 00:34:47.736 --rc genhtml_function_coverage=1 00:34:47.736 --rc genhtml_legend=1 00:34:47.736 --rc geninfo_all_blocks=1 00:34:47.736 --rc geninfo_unexecuted_blocks=1 00:34:47.736 00:34:47.736 ' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:47.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.736 --rc genhtml_branch_coverage=1 00:34:47.736 --rc genhtml_function_coverage=1 00:34:47.736 --rc genhtml_legend=1 00:34:47.736 --rc geninfo_all_blocks=1 00:34:47.736 --rc geninfo_unexecuted_blocks=1 00:34:47.736 00:34:47.736 ' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:47.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.736 --rc genhtml_branch_coverage=1 00:34:47.736 --rc genhtml_function_coverage=1 00:34:47.736 --rc genhtml_legend=1 00:34:47.736 --rc geninfo_all_blocks=1 00:34:47.736 --rc geninfo_unexecuted_blocks=1 00:34:47.736 00:34:47.736 ' 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.736 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:47.737 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.737 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.737 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.737 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.737 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.737 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:47.998 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.999 10:52:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.999 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.141 10:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:56.141 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.141 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:56.141 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:56.142 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:56.142 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.142 10:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:56.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:56.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms
00:34:56.142 
00:34:56.142 --- 10.0.0.2 ping statistics ---
00:34:56.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:56.142 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:56.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:56.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:34:56.142 
00:34:56.142 --- 10.0.0.1 ping statistics ---
00:34:56.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:56.142 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2311790
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2311790
00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2311790 ']' 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.142 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 [2024-11-20 10:52:27.712157] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.142 [2024-11-20 10:52:27.713287] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:34:56.142 [2024-11-20 10:52:27.713338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.142 [2024-11-20 10:52:27.812169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.142 [2024-11-20 10:52:27.862035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.142 [2024-11-20 10:52:27.862086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.142 [2024-11-20 10:52:27.862094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.142 [2024-11-20 10:52:27.862102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.142 [2024-11-20 10:52:27.862108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.142 [2024-11-20 10:52:27.862834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.143 [2024-11-20 10:52:27.940024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.143 [2024-11-20 10:52:27.940323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
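What the trace above amounts to: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace, pinned to core 1 (-m 0x2), with all tracepoint groups enabled (-e 0xFFFF) and interrupt mode in place of the usual polled reactors, after which the script blocks in waitforlisten until the RPC socket answers. A minimal stand-alone sketch of the same bring-up, assuming the default /var/tmp/spdk.sock RPC socket (the rpc_get_methods probe here is an illustrative substitute for the suite's waitforlisten helper):

# Start the NVMe-oF target in the test namespace; -i 0 is the shared-memory
# instance ID that later tooling (e.g. 'spdk_trace -s nvmf -i 0') refers back to.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Wait until the app serves RPCs before configuring it (-t is rpc.py's retry timeout).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null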
00:34:56.404 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.404 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.405 [2024-11-20 10:52:28.579726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.405 [2024-11-20 10:52:28.607986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:56.405 10:52:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:56.405 malloc0
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:56.405 {
00:34:56.405 "params": {
00:34:56.405 "name": "Nvme$subsystem",
00:34:56.405 "trtype": "$TEST_TRANSPORT",
00:34:56.405 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:56.405 "adrfam": "ipv4",
00:34:56.405 "trsvcid": "$NVMF_PORT",
00:34:56.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:56.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:56.405 "hdgst": ${hdgst:-false},
00:34:56.405 "ddgst": ${ddgst:-false}
00:34:56.405 },
00:34:56.405 "method": "bdev_nvme_attach_controller"
00:34:56.405 }
00:34:56.405 EOF
00:34:56.405 )")
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:56.405 10:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:56.405 "params": {
00:34:56.405 "name": "Nvme1",
00:34:56.405 "trtype": "tcp",
00:34:56.405 "traddr": "10.0.0.2",
00:34:56.405 "adrfam": "ipv4",
00:34:56.405 "trsvcid": "4420",
00:34:56.405 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:56.405 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:56.405 "hdgst": false,
00:34:56.405 "ddgst": false
00:34:56.405 },
00:34:56.405 "method": "bdev_nvme_attach_controller"
00:34:56.405 }'
00:34:56.405 [2024-11-20 10:52:28.714951] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization...
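The heredoc in the trace above is how gen_nvmf_target_json renders one bdev_nvme_attach_controller entry, and bdevperf receives the result through process substitution, which is why its command line reads --json /dev/fd/62. Spelled out by hand, an equivalent invocation would look roughly like this (the params block is copied from the expanded JSON above; the outer "subsystems"/"bdev" wrapper is an assumption about what the helper finally emits):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -t 10 -q 128 -w verify -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)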
00:34:56.405 [2024-11-20 10:52:28.715018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312109 ]
[2024-11-20 10:52:28.806208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 10:52:28.860017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 10 seconds...
00:34:58.995 6319.00 IOPS, 49.37 MiB/s
[2024-11-20T09:52:32.315Z] 6353.00 IOPS, 49.63 MiB/s
[2024-11-20T09:52:33.257Z] 6377.00 IOPS, 49.82 MiB/s
[2024-11-20T09:52:34.201Z] 6378.50 IOPS, 49.83 MiB/s
[2024-11-20T09:52:35.143Z] 6391.40 IOPS, 49.93 MiB/s
[2024-11-20T09:52:36.085Z] 6440.50 IOPS, 50.32 MiB/s
[2024-11-20T09:52:37.470Z] 6887.71 IOPS, 53.81 MiB/s
[2024-11-20T09:52:38.468Z] 7225.75 IOPS, 56.45 MiB/s
[2024-11-20T09:52:39.139Z] 7491.56 IOPS, 58.53 MiB/s
[2024-11-20T09:52:39.139Z] 7699.80 IOPS, 60.15 MiB/s
00:35:06.763 Latency(us)
00:35:06.763 [2024-11-20T09:52:39.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:06.763 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:35:06.763 Verification LBA range: start 0x0 length 0x1000
00:35:06.763 Nvme1n1 : 10.01 7702.25 60.17 0.00 0.00 16572.04 955.73 27962.03
00:35:06.763 [2024-11-20T09:52:39.139Z] ===================================================================================================================
00:35:06.763 [2024-11-20T09:52:39.139Z] Total : 7702.25 60.17 0.00 0.00 16572.04 955.73 27962.03
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2313930
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:07.035 {
00:35:07.035 "params": {
00:35:07.035 "name": "Nvme$subsystem",
00:35:07.035 "trtype": "$TEST_TRANSPORT",
00:35:07.035 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:07.035 "adrfam": "ipv4",
00:35:07.035 "trsvcid": "$NVMF_PORT",
00:35:07.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:07.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:07.035 "hdgst": ${hdgst:-false},
00:35:07.035 "ddgst": ${ddgst:-false}
00:35:07.035 },
00:35:07.035 "method": "bdev_nvme_attach_controller"
00:35:07.035 }
00:35:07.035 EOF
00:35:07.035 )")
00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:35:07.035
[2024-11-20 10:52:39.163259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.163288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:07.035 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:07.035 "params": { 00:35:07.035 "name": "Nvme1", 00:35:07.035 "trtype": "tcp", 00:35:07.035 "traddr": "10.0.0.2", 00:35:07.035 "adrfam": "ipv4", 00:35:07.035 "trsvcid": "4420", 00:35:07.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:07.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:07.035 "hdgst": false, 00:35:07.035 "ddgst": false 00:35:07.035 }, 00:35:07.035 "method": "bdev_nvme_attach_controller" 00:35:07.035 }' 00:35:07.035 [2024-11-20 10:52:39.175229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.175240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.187227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.187236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.199228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.199237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.211228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.211239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.218623] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
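Before the second bdevperf instance comes up, the verify-run table further up can be sanity-checked by hand; the IOPS, MiB/s, and average-latency columns are three views of the same measurement:

# Cross-check of the Total row from the 10-second verify run (8192-byte IOs):
#   7702.25 IOPS * 8192 B            = 63,096,832 B/s ~= 60.17 MiB/s  (MiB/s column)
#   128 (queue depth) / 7702.25 IOPS ~= 16.6 ms ~= 16572 us           (Average latency, by Little's law)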
00:35:07.035 [2024-11-20 10:52:39.218673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313930 ] 00:35:07.035 [2024-11-20 10:52:39.223228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.223237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.235227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.235236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.247227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.247235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.259227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.259235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.271227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.271235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.283227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.283236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.295226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.295236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.300316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.035 [2024-11-20 10:52:39.307228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.307237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.319227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.319238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.330109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.035 [2024-11-20 10:52:39.331228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.331237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.343231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.343243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.355232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.355245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.367228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:35:07.035 [2024-11-20 10:52:39.367237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.379228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.035 [2024-11-20 10:52:39.379238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.035 [2024-11-20 10:52:39.391227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.036 [2024-11-20 10:52:39.391236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.036 [2024-11-20 10:52:39.403237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.036 [2024-11-20 10:52:39.403255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.415230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.415242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.427232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.427246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.439230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.439241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.452045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.452061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.463229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.463242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 Running I/O for 5 seconds... 
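The error pairs repeating before and after this point are the expected product of the test replaying namespace-management RPCs against cnode1 while the 5-second random read/write run is in flight: NSID 1 is still attached, so every re-add is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces through the RPC layer as "Unable to add namespace". In isolation, the failing call and the removal that would clear the condition look like this (a sketch; the loop that actually drives the churn lives in target/zcopy.sh):

# Re-adding NSID 1 while it is attached fails exactly as logged above:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The add can only succeed again once the namespace has been removed:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1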
00:35:07.303 [2024-11-20 10:52:39.478142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.478165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.491223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.491239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.504104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.504121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.518044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.518060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.530921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.530937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.543918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.543934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.558457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.558474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.571260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.571276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.584460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.584476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.598506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.598522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.611924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.611939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.626460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.626475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.639456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.639471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.652297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.303 [2024-11-20 10:52:39.652312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.303 [2024-11-20 10:52:39.666404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.304 
[2024-11-20 10:52:39.666419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.679651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.679666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.694096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.694111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.707064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.707079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.720438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.720453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.734875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.734890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.748025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.748039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.762882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.762897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.776049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.776064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.790997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.791012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.804018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.804033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.817872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.817887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.830627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.830642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.843463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.843478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.856369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.856384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.870451] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.870465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.883599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.883613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.898413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.898428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.911307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.911323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.565 [2024-11-20 10:52:39.924017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.565 [2024-11-20 10:52:39.924031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:39.938549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:39.938566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:39.951859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:39.951874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:39.967212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:39.967228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:39.980176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:39.980190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:39.992627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:39.992642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:40.006931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:40.006947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:40.020024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:40.020040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:40.034405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:40.034420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:40.047554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:40.047569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:40.062551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.826 [2024-11-20 10:52:40.062567] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.826 [2024-11-20 10:52:40.075673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.075688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.090366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.090381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.103401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.103417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.116623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.116639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.130488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.130503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.143803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.143818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.158637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.158652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.171342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.171357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.184229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.184244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.827 [2024-11-20 10:52:40.198110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.827 [2024-11-20 10:52:40.198125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.211189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.211204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.224730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.224745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.238389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.238405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.251437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.251452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.264241] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.264255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.278446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.278461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.291626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.291641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.306339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.306354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.087 [2024-11-20 10:52:40.319237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.087 [2024-11-20 10:52:40.319252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.331980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.331994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.346748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.346763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.360000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.360015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.374388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.374403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.387475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.387490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.400483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.400498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.414675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.414690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.427778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.427793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.442357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.442372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.088 [2024-11-20 10:52:40.455236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.088 [2024-11-20 10:52:40.455251] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.348 18965.00 IOPS, 148.16 MiB/s [2024-11-20T09:52:40.725Z] [2024-11-20 10:52:40.468348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.468363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.482528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.482542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.495794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.495808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.510428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.510443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.523760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.523775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.537819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.537834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.550418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.550432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.563374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.563389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.575692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.575711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.590520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.590535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.603972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.603987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.618967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.618983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.632185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.632200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.646327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.646343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 
10:52:40.659546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.659561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.674461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.674476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.687096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.687112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.699884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.699899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.349 [2024-11-20 10:52:40.714381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.349 [2024-11-20 10:52:40.714397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.727439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.727455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.740278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.740293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.754542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.754556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.767549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.767564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.782408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.782423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.795537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.795552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.810284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.810300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.823395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.609 [2024-11-20 10:52:40.823411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.609 [2024-11-20 10:52:40.836136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.836156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.850360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.850375] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.863604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.863619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.878499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.878514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.891819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.891834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.906494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.906510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.919585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.919600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.934666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.934681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.947800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.947815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.962478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.962494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.610 [2024-11-20 10:52:40.975429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.610 [2024-11-20 10:52:40.975444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:40.988334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:40.988349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.002794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.002810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.016025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.016040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.030437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.030452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.043726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.043741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.059092] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.059106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.072296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.072310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.086202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.086216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.099560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.099580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.114257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.114272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.127246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.127261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.140173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.140188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.154682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.154698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.167815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.167830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.182463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.182479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.195602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.195618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.210792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.210808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.224294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.224309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:08.871 [2024-11-20 10:52:41.238627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:08.871 [2024-11-20 10:52:41.238643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.251833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.251849] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.266257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.266273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.279041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.279057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.292449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.292464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.306446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.306461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.319482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.319497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.332326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.332341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.346678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.346693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.359581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.359596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.374463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.374478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.387593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.387609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.402284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.402300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.415468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.415483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.428133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.428148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.442440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.442455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.455614] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.455629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.470074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.470089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 18969.50 IOPS, 148.20 MiB/s [2024-11-20T09:52:41.508Z] [2024-11-20 10:52:41.483263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.483278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.132 [2024-11-20 10:52:41.496031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.132 [2024-11-20 10:52:41.496045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.510418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.510434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.523574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.523588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.538418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.538433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.551579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.551594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.566098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.566112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.579342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.579357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.592326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.592340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.606714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.606730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.619865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.619880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.634048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.634063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.647078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:09.393 [2024-11-20 10:52:41.647094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.659857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.659872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.674954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.674969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.688216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.688231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.702540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.702555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.715598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.715613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.730337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.730352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.743198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.743213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.393 [2024-11-20 10:52:41.756510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.393 [2024-11-20 10:52:41.756525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.770690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.770705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.783724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.783739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.798348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.798363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.811576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.811590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.826083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.826098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.839309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.839323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.852319] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.852333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.654 [2024-11-20 10:52:41.866788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.654 [2024-11-20 10:52:41.866807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.879895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.879910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.894586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.894602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.907881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.907896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.922483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.922498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.935533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.935548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.950569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.950584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.963679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.963693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.978682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.978697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:41.992181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:41.992196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:42.006682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:42.006697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.655 [2024-11-20 10:52:42.019941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.655 [2024-11-20 10:52:42.019956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.034301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.034316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.047460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.047477] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.060903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.060917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.074653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.074668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.087479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.087494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.100323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.100338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.114368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.114383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.127500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.127520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.142429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.142444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.155604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.155619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.169913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.169928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.182762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.182777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.195718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.195733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.210894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.210909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.224069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.224084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.238671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.238687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.251768] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.251783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.266291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.266307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:09.917 [2024-11-20 10:52:42.279328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:09.917 [2024-11-20 10:52:42.279343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.291783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.291799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.306273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.306288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.319197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.319213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.332223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.332237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.346650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.346666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.359711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.359726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.374537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.374552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.387706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.387725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.402849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.402864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.416164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.416179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.430122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.430138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:10.179 [2024-11-20 10:52:42.443558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:10.179 [2024-11-20 10:52:42.443573] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:10.179 [2024-11-20 10:52:42.458024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:10.179 [2024-11-20 10:52:42.458040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:10.179 [... the same add_ns_ext / ns_paused error pair repeats every 12-15 ms from 10:52:42.471 through 10:52:44.483 while the retry loop runs; identical intermediate entries omitted, interleaved per-second IOPS samples kept below ...]
00:35:10.179 18973.00 IOPS, 148.23 MiB/s [2024-11-20T09:52:42.555Z]
00:35:11.225 18983.75 IOPS, 148.31 MiB/s [2024-11-20T09:52:43.601Z]
00:35:12.271 19004.80 IOPS, 148.47 MiB/s [2024-11-20T09:52:44.647Z]
00:35:12.271
00:35:12.271 Latency(us)
00:35:12.271 Device Information          : runtime(s)      IOPS    MiB/s   Fail/s   TO/s   Average       min       max
00:35:12.271 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:12.271 Nvme1n1                     :       5.01  19009.52   148.51     0.00   0.00   6727.30   2689.71  11304.96
00:35:12.271 ===================================================================================================================
00:35:12.271 Total                       :             19009.52   148.51     0.00   0.00   6727.30   2689.71  11304.96
00:35:12.271 [2024-11-20 10:52:44.495232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:12.271 [2024-11-20 10:52:44.495247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:12.271 [... error pair repeats at ~12 ms intervals through 10:52:44.579 as the retry loop winds down; intermediate entries omitted ...]
00:35:12.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2313930) - No such process
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2313930
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
delay0
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.271 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-20 10:52:44.752661] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
[2024-11-20 10:52:51.151288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180f60 is same with the state(6) to be set
00:35:19.111 Initializing NVMe Controllers
00:35:19.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:19.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:19.111 Initialization complete. Launching workers.
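What the harness just traced is the whole zcopy abort scenario: NSID 1 is torn down, re-backed by a deliberately slow delay bdev, and the abort example then hammers it. A rough standalone rendering of that same sequence is sketched below; it assumes a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev, and uses SPDK's scripts/rpc.py directly where the harness goes through its rpc_cmd wrapper:

# Sketch only -- replays the RPC sequence from the trace above by hand.
# Assumes: a running SPDK nvmf target, scripts/rpc.py reachable as rpc.py,
# and the malloc0 bdev plus cnode1 subsystem from this run already set up.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # free NSID 1
# Wrap malloc0 in a delay bdev; all four latencies (avg/p99 read, avg/p99
# write) are 1,000,000 us, matching the -r/-t/-w/-n values in the trace.
rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # re-expose as NSID 1
# Drive aborts at the now-slow namespace: 1 core, 5 s, queue depth 64, 50/50 randrw.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The summary table above is self-consistent: at the 8 KiB I/O size, 19009.52 IOPS x 8192 bytes comes to roughly 148.51 MiB/s.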
00:35:19.111 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 9606
00:35:19.111 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9831, failed to submit 66
00:35:19.111 success 9720, unsuccessful 111, failed 0
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:19.111 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2311790 ']'
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2311790
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2311790 ']'
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2311790
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311790
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311790'
killing process with pid 2311790
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2311790
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2311790
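The killprocess helper traced above checks the target's comm name and refuses to signal anything running as sudo before it kills and reaps the pid. A paraphrased sketch of that guard (the real helper lives in test/common/autotest_common.sh; this is reconstructed from the trace, not copied):

# Paraphrased sketch of the guard pattern visible in the trace above.
killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1                   # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0      # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # works here: the target is a child of this shell
}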
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:19.112 10:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:21.655
00:35:21.655 real 0m33.584s
00:35:21.655 user 0m42.673s
00:35:21.655 sys 0m12.075s
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:21.655 ************************************
00:35:21.655 END TEST nvmf_zcopy
00:35:21.655 ************************************
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:21.655 ************************************
00:35:21.655 START TEST nvmf_nmic
00:35:21.655 ************************************
00:35:21.655 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:21.655 * Looking for test storage...
00:35:21.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.656 --rc genhtml_branch_coverage=1 00:35:21.656 --rc genhtml_function_coverage=1 00:35:21.656 --rc genhtml_legend=1 00:35:21.656 --rc geninfo_all_blocks=1 00:35:21.656 --rc geninfo_unexecuted_blocks=1 00:35:21.656 00:35:21.656 ' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.656 --rc genhtml_branch_coverage=1 00:35:21.656 --rc genhtml_function_coverage=1 00:35:21.656 --rc genhtml_legend=1 00:35:21.656 --rc geninfo_all_blocks=1 00:35:21.656 --rc geninfo_unexecuted_blocks=1 00:35:21.656 00:35:21.656 ' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.656 --rc genhtml_branch_coverage=1 00:35:21.656 --rc genhtml_function_coverage=1 00:35:21.656 --rc genhtml_legend=1 00:35:21.656 --rc geninfo_all_blocks=1 00:35:21.656 --rc geninfo_unexecuted_blocks=1 00:35:21.656 00:35:21.656 ' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:21.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.656 --rc genhtml_branch_coverage=1 00:35:21.656 --rc genhtml_function_coverage=1 00:35:21.656 --rc genhtml_legend=1 00:35:21.656 --rc geninfo_all_blocks=1 00:35:21.656 --rc geninfo_unexecuted_blocks=1 00:35:21.656 00:35:21.656 ' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.656 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.657 10:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.657 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.800 10:53:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:29.800 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.800 10:53:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:29.800 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:29.800 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.800 
10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:29.800 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.800 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.801 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.801 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.801 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.801 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.801 10:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
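Taken together, the common.sh@265-278 commands above turn one dual-port E810 NIC into a self-contained loopback topology: the target port is moved into its own network namespace, so initiator traffic from the root namespace must cross the physical link. Condensed into one place (interface names and addresses are this run's values):

# Namespace setup as traced above (nvmf/common.sh@265-278).
ip netns add cvl_0_0_ns_spdk                    # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0         # target IP inside the namespace

The link-up, iptables and ping steps that follow in the trace confirm both sides can reach each other before any NVMe/TCP traffic is attempted.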
00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:35:29.801 00:35:29.801 --- 10.0.0.2 ping statistics --- 00:35:29.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.801 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:29.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:35:29.801 00:35:29.801 --- 10.0.0.1 ping statistics --- 00:35:29.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.801 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2320485 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2320485 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2320485 ']' 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.801 10:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:29.801 [2024-11-20 10:53:01.377118] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:29.801 [2024-11-20 10:53:01.378273] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:35:29.801 [2024-11-20 10:53:01.378329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.801 [2024-11-20 10:53:01.479013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:29.801 [2024-11-20 10:53:01.532997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.801 [2024-11-20 10:53:01.533052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.801 [2024-11-20 10:53:01.533061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.801 [2024-11-20 10:53:01.533069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.801 [2024-11-20 10:53:01.533079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:29.801 [2024-11-20 10:53:01.535451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.801 [2024-11-20 10:53:01.535710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:29.801 [2024-11-20 10:53:01.535876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:29.801 [2024-11-20 10:53:01.535878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.801 [2024-11-20 10:53:01.613029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:29.801 [2024-11-20 10:53:01.614381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:29.801 [2024-11-20 10:53:01.614512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
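Above, nvmfappstart launches nvmf_tgt inside the target namespace with -m 0xF and --interrupt-mode, the four reactors come up, and each spdk_thread is switched to interrupt mode (the poll-group notices continue below). waitforlisten at common.sh@510 then blocks until the RPC socket answers; a rough standalone sketch of that idea follows, assuming the real helper in autotest_common.sh also checks the pid, which this approximation mirrors:

# Approximate waitforlisten: poll the target's RPC socket with a retry
# budget, bailing out if the process died. Pid and socket path are this
# run's values; the loop itself is an illustrative reconstruction.
rpc_addr=/var/tmp/spdk.sock
pid=2320485
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || exit 1        # target exited early
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        break                                   # RPC server is ready
    fi
    sleep 0.5
done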
00:35:29.801 [2024-11-20 10:53:01.614811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:29.801 [2024-11-20 10:53:01.614871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 [2024-11-20 10:53:02.236885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 Malloc0 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 [2024-11-20 10:53:02.329177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:30.064 test case1: single bdev can't be used in multiple subsystems 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 [2024-11-20 10:53:02.364505] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:30.064 [2024-11-20 10:53:02.364532] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:30.064 [2024-11-20 10:53:02.364541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:30.064 request: 00:35:30.064 { 00:35:30.064 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:30.064 "namespace": { 00:35:30.064 "bdev_name": "Malloc0", 00:35:30.064 "no_auto_visible": false 00:35:30.064 }, 00:35:30.064 "method": "nvmf_subsystem_add_ns", 00:35:30.064 "req_id": 1 00:35:30.064 } 00:35:30.064 Got JSON-RPC error response 00:35:30.064 response: 00:35:30.064 { 00:35:30.064 "code": -32602, 00:35:30.064 "message": "Invalid parameters" 00:35:30.064 } 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:30.064 10:53:02 
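The JSON-RPC exchange above is the heart of the nmic test: Malloc0 already backs a namespace in cnode1, and that first nvmf_subsystem_add_ns claimed the bdev with an exclusive_write claim, so adding it to cnode2 must fail with -32602. A sketch of the check, using the same rpc_cmd calls the trace shows (rpc.py path assumed relative to the SPDK tree):

# Negative test from target/nmic.sh@26-36: the second add_ns must fail
# because the first one claimed Malloc0 with an exclusive_write claim.
rpc=scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
nmic_status=0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    echo 'Adding namespace passed - failure expected.'
    exit 1
fi
echo ' Adding namespace failed - expected result.'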
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:30.064 Adding namespace failed - expected result. 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:30.064 test case2: host connect to nvmf target in multiple paths 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:30.064 [2024-11-20 10:53:02.376663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.064 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:30.637 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:31.210 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:31.210 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:31.210 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:31.210 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:31.210 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:33.123 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:33.123 [global] 00:35:33.123 thread=1 00:35:33.123 invalidate=1 
00:35:33.123 rw=write 00:35:33.123 time_based=1 00:35:33.123 runtime=1 00:35:33.123 ioengine=libaio 00:35:33.123 direct=1 00:35:33.123 bs=4096 00:35:33.123 iodepth=1 00:35:33.123 norandommap=0 00:35:33.123 numjobs=1 00:35:33.123 00:35:33.123 verify_dump=1 00:35:33.123 verify_backlog=512 00:35:33.123 verify_state_save=0 00:35:33.123 do_verify=1 00:35:33.123 verify=crc32c-intel 00:35:33.123 [job0] 00:35:33.123 filename=/dev/nvme0n1 00:35:33.123 Could not set queue depth (nvme0n1) 00:35:33.383 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:33.383 fio-3.35 00:35:33.383 Starting 1 thread 00:35:34.768 00:35:34.768 job0: (groupid=0, jobs=1): err= 0: pid=2321361: Wed Nov 20 10:53:06 2024 00:35:34.768 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:34.768 slat (nsec): min=8040, max=26430, avg=24867.88, stdev=2313.05 00:35:34.768 clat (usec): min=671, max=1136, avg=959.21, stdev=64.33 00:35:34.768 lat (usec): min=696, max=1162, avg=984.08, stdev=65.03 00:35:34.768 clat percentiles (usec): 00:35:34.768 | 1.00th=[ 742], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 914], 00:35:34.768 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 979], 00:35:34.768 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1045], 00:35:34.768 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139], 00:35:34.768 | 99.99th=[ 1139] 00:35:34.768 write: IOPS=803, BW=3213KiB/s (3290kB/s)(3216KiB/1001msec); 0 zone resets 00:35:34.768 slat (nsec): min=9704, max=65491, avg=27927.77, stdev=9592.51 00:35:34.768 clat (usec): min=212, max=848, avg=577.62, stdev=102.95 00:35:34.768 lat (usec): min=222, max=859, avg=605.55, stdev=107.16 00:35:34.768 clat percentiles (usec): 00:35:34.768 | 1.00th=[ 334], 5.00th=[ 392], 10.00th=[ 429], 20.00th=[ 490], 00:35:34.768 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:35:34.768 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:35:34.768 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 848], 99.95th=[ 848], 00:35:34.768 | 99.99th=[ 848] 00:35:34.768 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.768 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.768 lat (usec) : 250=0.30%, 500=13.53%, 750=46.43%, 1000=31.00% 00:35:34.768 lat (msec) : 2=8.74% 00:35:34.768 cpu : usr=2.60%, sys=2.90%, ctx=1316, majf=0, minf=1 00:35:34.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.768 issued rwts: total=512,804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.768 00:35:34.768 Run status group 0 (all jobs): 00:35:34.768 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:35:34.768 WRITE: bw=3213KiB/s (3290kB/s), 3213KiB/s-3213KiB/s (3290kB/s-3290kB/s), io=3216KiB (3293kB), run=1001-1001msec 00:35:34.768 00:35:34.768 Disk stats (read/write): 00:35:34.768 nvme0n1: ios=562/635, merge=0/0, ticks=895/371, in_queue=1266, util=97.60% 00:35:34.768 10:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:34.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:34.768 10:53:07 
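For reference, the fio-wrapper flags -p nvmf -i 4096 -d 1 -t write -r 1 expand to essentially the job file dumped above: a 1-second, queue-depth-1, 4KiB sequential write with crc32c verification. A roughly equivalent standalone command line (an illustration of the same job, not the wrapper's literal output):

# Approximate standalone form of the wrapped fio job shown in the log.
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The 2048KiB READ line in the results appears to be the verify pass reading back what was written (512 blocks at bs=4096), while the WRITE line shows the 804 writes completed in the 1-second window.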
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.768 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.768 rmmod nvme_tcp 00:35:34.768 rmmod nvme_fabrics 00:35:34.768 rmmod nvme_keyring 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2320485 ']' 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2320485 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2320485 ']' 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2320485 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320485 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2320485' 00:35:35.029 killing process with pid 2320485 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2320485 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2320485 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.029 10:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.579 00:35:37.579 real 0m15.867s 00:35:37.579 user 0m36.449s 00:35:37.579 sys 0m7.416s 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:37.579 ************************************ 00:35:37.579 END TEST nvmf_nmic 00:35:37.579 ************************************ 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:37.579 ************************************ 00:35:37.579 START TEST nvmf_fio_target 00:35:37.579 ************************************ 00:35:37.579 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:37.579 * Looking for test storage... 
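The nvmftestfini sequence traced above tears down in reverse order: unload the nvme-tcp/fabrics/keyring modules, kill the target by pid, strip only the firewall rules the test added, then remove the namespace and flush the interface addresses. The rule stripping works because every rule was inserted with an SPDK_NVMF comment tag, so cleanup is a plain filter-and-restore; both commands appear verbatim in the trace:

# Rule added during setup (nvmf/common.sh@790 earlier in this log):
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Teardown drops every SPDK_NVMF-tagged rule at once (common.sh@791):
iptables-save | grep -v SPDK_NVMF | iptables-restore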
00:35:37.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:37.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.580 --rc genhtml_branch_coverage=1 00:35:37.580 --rc genhtml_function_coverage=1 00:35:37.580 --rc genhtml_legend=1 00:35:37.580 --rc geninfo_all_blocks=1 00:35:37.580 --rc geninfo_unexecuted_blocks=1 00:35:37.580 00:35:37.580 ' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:37.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.580 --rc genhtml_branch_coverage=1 00:35:37.580 --rc genhtml_function_coverage=1 00:35:37.580 --rc genhtml_legend=1 00:35:37.580 --rc geninfo_all_blocks=1 00:35:37.580 --rc geninfo_unexecuted_blocks=1 00:35:37.580 00:35:37.580 ' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:37.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.580 --rc genhtml_branch_coverage=1 00:35:37.580 --rc genhtml_function_coverage=1 00:35:37.580 --rc genhtml_legend=1 00:35:37.580 --rc geninfo_all_blocks=1 00:35:37.580 --rc geninfo_unexecuted_blocks=1 00:35:37.580 00:35:37.580 ' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:37.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.580 --rc genhtml_branch_coverage=1 00:35:37.580 --rc genhtml_function_coverage=1 00:35:37.580 --rc genhtml_legend=1 00:35:37.580 --rc geninfo_all_blocks=1 00:35:37.580 --rc geninfo_unexecuted_blocks=1 00:35:37.580 
00:35:37.580 ' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[toolchain trio repeated by earlier re-sourcing of export.sh]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.580 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain PATH, re-prepended] 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain PATH, re-prepended] 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo [same toolchain PATH] 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.581 10:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.722 10:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.722 10:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:45.722 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:45.722 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.722 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:45.723 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:45.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.723 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:35:45.723 00:35:45.723 --- 10.0.0.2 ping statistics --- 00:35:45.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.723 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:35:45.723 00:35:45.723 --- 10.0.0.1 ping statistics --- 00:35:45.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.723 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2325822 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2325822 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2325822 ']' 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
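Condensed, the nvmf_tcp_init block traced above amounts to the following shell sequence. This is a sketch rather than the verbatim script: cvl_0_0 and cvl_0_1 are the interface names detected on this host, stale addresses are flushed from both ports first, and the ipts helper is plain iptables plus a bookkeeping comment.

    ip netns add cvl_0_0_ns_spdk                        # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first E810 port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host namespace

Both probes answer in well under a millisecond above (0.625 ms and 0.266 ms), so the two ports can reach each other before the target is ever launched.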
00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.723 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.723 [2024-11-20 10:53:17.383132] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.723 [2024-11-20 10:53:17.384285] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:35:45.723 [2024-11-20 10:53:17.384340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.723 [2024-11-20 10:53:17.487267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.723 [2024-11-20 10:53:17.541046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.723 [2024-11-20 10:53:17.541098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.723 [2024-11-20 10:53:17.541107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.723 [2024-11-20 10:53:17.541115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.723 [2024-11-20 10:53:17.541121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.723 [2024-11-20 10:53:17.543314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.723 [2024-11-20 10:53:17.543474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.723 [2024-11-20 10:53:17.543633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.723 [2024-11-20 10:53:17.543633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.723 [2024-11-20 10:53:17.621033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:45.723 [2024-11-20 10:53:17.622417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:45.723 [2024-11-20 10:53:17.622568] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.723 [2024-11-20 10:53:17.622870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.723 [2024-11-20 10:53:17.622931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
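With the target now running in interrupt mode inside the namespace (pid 2325822, started via ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF), target/fio.sh provisions it over /var/tmp/spdk.sock. Stripped of the xtrace noise that follows, and with the long workspace path shortened to rpc.py, the sequence is, in order of appearance in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512        # 7 times, Malloc0..Malloc6 (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512)
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # likewise Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0       # then concat0
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4; waitforserial then confirms all four by counting SPDKISFASTANDAWESOME serials in lsblk -l -o NAME,SERIAL output before the fio-wrapper runs are started against them.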
00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.985 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:46.246 [2024-11-20 10:53:18.400553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.246 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:46.507 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:46.507 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:46.507 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:46.507 10:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:46.768 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:46.768 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:47.036 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:47.036 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:47.303 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:47.303 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:47.303 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:47.618 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:47.618 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:47.905 10:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:47.905 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:47.905 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:48.185 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:48.185 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:48.445 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:48.445 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:48.445 10:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.706 [2024-11-20 10:53:20.976472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.706 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:48.967 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:49.227 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:49.797 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:49.797 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:49.797 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:49.797 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:49.797 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:49.797 10:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:51.706 10:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:51.706 [global] 00:35:51.706 thread=1 00:35:51.706 invalidate=1 00:35:51.706 rw=write 00:35:51.706 time_based=1 00:35:51.706 runtime=1 00:35:51.706 ioengine=libaio 00:35:51.706 direct=1 00:35:51.706 bs=4096 00:35:51.706 iodepth=1 00:35:51.706 norandommap=0 00:35:51.706 numjobs=1 00:35:51.706 00:35:51.706 verify_dump=1 00:35:51.706 verify_backlog=512 00:35:51.706 verify_state_save=0 00:35:51.706 do_verify=1 00:35:51.706 verify=crc32c-intel 00:35:51.706 [job0] 00:35:51.706 filename=/dev/nvme0n1 00:35:51.706 [job1] 00:35:51.706 filename=/dev/nvme0n2 00:35:51.706 [job2] 00:35:51.706 filename=/dev/nvme0n3 00:35:51.706 [job3] 00:35:51.706 filename=/dev/nvme0n4 00:35:51.706 Could not set queue depth (nvme0n1) 00:35:51.706 Could not set queue depth (nvme0n2) 00:35:51.706 Could not set queue depth (nvme0n3) 00:35:51.706 Could not set queue depth (nvme0n4) 00:35:52.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:52.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:52.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:52.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:52.275 fio-3.35 00:35:52.275 Starting 4 threads 00:35:53.219 00:35:53.219 job0: (groupid=0, jobs=1): err= 0: pid=2327306: Wed Nov 20 10:53:25 2024 00:35:53.219 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:53.219 slat (nsec): min=25668, max=44709, avg=26546.36, stdev=2367.94 00:35:53.219 clat (usec): min=763, max=1279, avg=1014.29, stdev=76.66 00:35:53.219 lat (usec): min=790, max=1306, avg=1040.84, stdev=76.56 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 832], 5.00th=[ 881], 10.00th=[ 914], 20.00th=[ 955], 00:35:53.219 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1037], 00:35:53.219 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:35:53.219 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1287], 99.95th=[ 1287], 00:35:53.219 | 99.99th=[ 1287] 00:35:53.219 write: IOPS=729, BW=2917KiB/s (2987kB/s)(2920KiB/1001msec); 0 zone resets 00:35:53.219 slat (nsec): min=8991, max=67558, avg=29224.23, stdev=10189.44 00:35:53.219 clat (usec): min=212, max=1343, avg=597.97, stdev=126.14 00:35:53.219 lat (usec): min=222, max=1370, avg=627.19, stdev=130.67 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[ 486], 00:35:53.219 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:35:53.219 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 775], 00:35:53.219 | 
99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 1352], 99.95th=[ 1352], 00:35:53.219 | 99.99th=[ 1352] 00:35:53.219 bw ( KiB/s): min= 4096, max= 4096, per=32.71%, avg=4096.00, stdev= 0.00, samples=1 00:35:53.219 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:53.219 lat (usec) : 250=0.08%, 500=13.29%, 750=40.02%, 1000=22.06% 00:35:53.219 lat (msec) : 2=24.56% 00:35:53.219 cpu : usr=2.90%, sys=4.30%, ctx=1242, majf=0, minf=2 00:35:53.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 issued rwts: total=512,730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.219 job1: (groupid=0, jobs=1): err= 0: pid=2327313: Wed Nov 20 10:53:25 2024 00:35:53.219 read: IOPS=616, BW=2466KiB/s (2525kB/s)(2468KiB/1001msec) 00:35:53.219 slat (nsec): min=7125, max=44758, avg=23971.30, stdev=7812.95 00:35:53.219 clat (usec): min=554, max=995, avg=818.66, stdev=69.14 00:35:53.219 lat (usec): min=581, max=1021, avg=842.63, stdev=71.05 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 627], 5.00th=[ 701], 10.00th=[ 725], 20.00th=[ 766], 00:35:53.219 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 840], 00:35:53.219 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 906], 95.00th=[ 922], 00:35:53.219 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 996], 99.95th=[ 996], 00:35:53.219 | 99.99th=[ 996] 00:35:53.219 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:53.219 slat (nsec): min=9898, max=52403, avg=27248.55, stdev=11448.53 00:35:53.219 clat (usec): min=216, max=663, avg=430.42, stdev=71.95 00:35:53.219 lat (usec): min=233, max=697, avg=457.66, stdev=77.78 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 355], 00:35:53.219 | 30.00th=[ 379], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 461], 00:35:53.219 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 537], 00:35:53.219 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 660], 00:35:53.219 | 99.99th=[ 660] 00:35:53.219 bw ( KiB/s): min= 4087, max= 4087, per=32.63%, avg=4087.00, stdev= 0.00, samples=1 00:35:53.219 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:53.219 lat (usec) : 250=0.30%, 500=53.50%, 750=14.75%, 1000=31.44% 00:35:53.219 cpu : usr=2.40%, sys=4.20%, ctx=1645, majf=0, minf=1 00:35:53.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 issued rwts: total=617,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.219 job2: (groupid=0, jobs=1): err= 0: pid=2327321: Wed Nov 20 10:53:25 2024 00:35:53.219 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:53.219 slat (nsec): min=25008, max=58734, avg=26410.07, stdev=3439.68 00:35:53.219 clat (usec): min=689, max=1408, avg=1073.83, stdev=112.79 00:35:53.219 lat (usec): min=715, max=1434, avg=1100.24, stdev=112.87 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 988], 00:35:53.219 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 
60.00th=[ 1106], 00:35:53.219 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1254], 00:35:53.219 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1401], 99.95th=[ 1401], 00:35:53.219 | 99.99th=[ 1401] 00:35:53.219 write: IOPS=656, BW=2625KiB/s (2688kB/s)(2628KiB/1001msec); 0 zone resets 00:35:53.219 slat (nsec): min=9788, max=69593, avg=31077.39, stdev=7759.17 00:35:53.219 clat (usec): min=178, max=1047, avg=619.13, stdev=127.50 00:35:53.219 lat (usec): min=211, max=1080, avg=650.21, stdev=129.92 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 310], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 506], 00:35:53.219 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:35:53.219 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 816], 00:35:53.219 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 1045], 99.95th=[ 1045], 00:35:53.219 | 99.99th=[ 1045] 00:35:53.219 bw ( KiB/s): min= 4096, max= 4096, per=32.71%, avg=4096.00, stdev= 0.00, samples=1 00:35:53.219 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:53.219 lat (usec) : 250=0.26%, 500=10.27%, 750=37.04%, 1000=19.08% 00:35:53.219 lat (msec) : 2=33.36% 00:35:53.219 cpu : usr=1.80%, sys=3.50%, ctx=1169, majf=0, minf=1 00:35:53.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 issued rwts: total=512,657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.219 job3: (groupid=0, jobs=1): err= 0: pid=2327327: Wed Nov 20 10:53:25 2024 00:35:53.219 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:53.219 slat (nsec): min=7818, max=59518, avg=26207.12, stdev=3395.90 00:35:53.219 clat (usec): min=669, max=1465, avg=1052.30, stdev=129.96 00:35:53.219 lat (usec): min=696, max=1492, avg=1078.51, stdev=129.98 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 742], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 947], 00:35:53.219 | 30.00th=[ 996], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1090], 00:35:53.219 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1254], 00:35:53.219 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1467], 99.95th=[ 1467], 00:35:53.219 | 99.99th=[ 1467] 00:35:53.219 write: IOPS=722, BW=2889KiB/s (2958kB/s)(2892KiB/1001msec); 0 zone resets 00:35:53.219 slat (nsec): min=9683, max=51646, avg=24380.79, stdev=11064.54 00:35:53.219 clat (usec): min=235, max=1379, avg=583.24, stdev=155.78 00:35:53.219 lat (usec): min=245, max=1412, avg=607.62, stdev=159.77 00:35:53.219 clat percentiles (usec): 00:35:53.219 | 1.00th=[ 265], 5.00th=[ 359], 10.00th=[ 396], 20.00th=[ 453], 00:35:53.219 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 619], 00:35:53.219 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 783], 95.00th=[ 840], 00:35:53.219 | 99.00th=[ 963], 99.50th=[ 1123], 99.90th=[ 1385], 99.95th=[ 1385], 00:35:53.219 | 99.99th=[ 1385] 00:35:53.219 bw ( KiB/s): min= 4087, max= 4087, per=32.63%, avg=4087.00, stdev= 0.00, samples=1 00:35:53.219 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:53.219 lat (usec) : 250=0.16%, 500=18.54%, 750=32.63%, 1000=19.68% 00:35:53.219 lat (msec) : 2=28.99% 00:35:53.219 cpu : usr=1.50%, sys=3.30%, ctx=1235, majf=0, minf=2 00:35:53.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.219 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.219 issued rwts: total=512,723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:53.219 00:35:53.219 Run status group 0 (all jobs): 00:35:53.219 READ: bw=8603KiB/s (8810kB/s), 2046KiB/s-2466KiB/s (2095kB/s-2525kB/s), io=8612KiB (8819kB), run=1001-1001msec 00:35:53.220 WRITE: bw=12.2MiB/s (12.8MB/s), 2625KiB/s-4092KiB/s (2688kB/s-4190kB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:35:53.220 00:35:53.220 Disk stats (read/write): 00:35:53.220 nvme0n1: ios=535/512, merge=0/0, ticks=507/239, in_queue=746, util=86.67% 00:35:53.220 nvme0n2: ios=540/860, merge=0/0, ticks=1361/360, in_queue=1721, util=96.63% 00:35:53.220 nvme0n3: ios=470/512, merge=0/0, ticks=662/301, in_queue=963, util=90.69% 00:35:53.220 nvme0n4: ios=519/512, merge=0/0, ticks=801/285, in_queue=1086, util=91.33% 00:35:53.220 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:53.481 [global] 00:35:53.481 thread=1 00:35:53.481 invalidate=1 00:35:53.481 rw=randwrite 00:35:53.481 time_based=1 00:35:53.481 runtime=1 00:35:53.481 ioengine=libaio 00:35:53.481 direct=1 00:35:53.481 bs=4096 00:35:53.481 iodepth=1 00:35:53.481 norandommap=0 00:35:53.481 numjobs=1 00:35:53.481 00:35:53.481 verify_dump=1 00:35:53.481 verify_backlog=512 00:35:53.481 verify_state_save=0 00:35:53.481 do_verify=1 00:35:53.481 verify=crc32c-intel 00:35:53.481 [job0] 00:35:53.481 filename=/dev/nvme0n1 00:35:53.481 [job1] 00:35:53.481 filename=/dev/nvme0n2 00:35:53.481 [job2] 00:35:53.481 filename=/dev/nvme0n3 00:35:53.481 [job3] 00:35:53.481 filename=/dev/nvme0n4 00:35:53.481 Could not set queue depth (nvme0n1) 00:35:53.481 Could not set queue depth (nvme0n2) 00:35:53.481 Could not set queue depth (nvme0n3) 00:35:53.481 Could not set queue depth (nvme0n4) 00:35:53.741 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:53.741 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:53.741 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:53.741 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:53.741 fio-3.35 00:35:53.741 Starting 4 threads 00:35:55.127 00:35:55.127 job0: (groupid=0, jobs=1): err= 0: pid=2327814: Wed Nov 20 10:53:27 2024 00:35:55.128 read: IOPS=130, BW=524KiB/s (536kB/s)(528KiB/1008msec) 00:35:55.128 slat (nsec): min=7447, max=29663, avg=24423.99, stdev=4335.39 00:35:55.128 clat (usec): min=548, max=42200, avg=5030.21, stdev=12129.58 00:35:55.128 lat (usec): min=574, max=42226, avg=5054.63, stdev=12129.97 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 594], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 947], 00:35:55.128 | 30.00th=[ 988], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:35:55.128 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1352], 95.00th=[41681], 00:35:55.128 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:55.128 | 99.99th=[42206] 00:35:55.128 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:35:55.128 slat (nsec): min=9423, max=50749, 
avg=25866.77, stdev=10551.14 00:35:55.128 clat (usec): min=182, max=1009, avg=629.60, stdev=152.10 00:35:55.128 lat (usec): min=193, max=1042, avg=655.47, stdev=155.01 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 297], 5.00th=[ 388], 10.00th=[ 424], 20.00th=[ 494], 00:35:55.128 | 30.00th=[ 545], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 676], 00:35:55.128 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 881], 00:35:55.128 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1012], 99.95th=[ 1012], 00:35:55.128 | 99.99th=[ 1012] 00:35:55.128 bw ( KiB/s): min= 4096, max= 4096, per=43.28%, avg=4096.00, stdev= 0.00, samples=1 00:35:55.128 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:55.128 lat (usec) : 250=0.31%, 500=16.93%, 750=44.88%, 1000=23.91% 00:35:55.128 lat (msec) : 2=11.96%, 50=2.02% 00:35:55.128 cpu : usr=1.09%, sys=1.39%, ctx=648, majf=0, minf=1 00:35:55.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 issued rwts: total=132,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:55.128 job1: (groupid=0, jobs=1): err= 0: pid=2327822: Wed Nov 20 10:53:27 2024 00:35:55.128 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:55.128 slat (nsec): min=25434, max=60043, avg=26756.64, stdev=2981.59 00:35:55.128 clat (usec): min=737, max=1377, avg=1058.21, stdev=95.21 00:35:55.128 lat (usec): min=764, max=1403, avg=1084.96, stdev=95.18 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 816], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 988], 00:35:55.128 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1090], 00:35:55.128 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:35:55.128 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1385], 99.95th=[ 1385], 00:35:55.128 | 99.99th=[ 1385] 00:35:55.128 write: IOPS=640, BW=2561KiB/s (2623kB/s)(2564KiB/1001msec); 0 zone resets 00:35:55.128 slat (nsec): min=9753, max=52637, avg=29577.04, stdev=9764.91 00:35:55.128 clat (usec): min=263, max=959, avg=649.35, stdev=123.57 00:35:55.128 lat (usec): min=274, max=992, avg=678.93, stdev=128.11 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 371], 5.00th=[ 412], 10.00th=[ 478], 20.00th=[ 537], 00:35:55.128 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 701], 00:35:55.128 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824], 00:35:55.128 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:35:55.128 | 99.99th=[ 963] 00:35:55.128 bw ( KiB/s): min= 4096, max= 4096, per=43.28%, avg=4096.00, stdev= 0.00, samples=1 00:35:55.128 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:55.128 lat (usec) : 500=7.63%, 750=35.56%, 1000=23.07% 00:35:55.128 lat (msec) : 2=33.74% 00:35:55.128 cpu : usr=1.10%, sys=4.00%, ctx=1158, majf=0, minf=1 00:35:55.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 issued rwts: total=512,641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:55.128 job2: (groupid=0, jobs=1): err= 0: 
pid=2327835: Wed Nov 20 10:53:27 2024 00:35:55.128 read: IOPS=14, BW=59.9KiB/s (61.4kB/s)(60.0KiB/1001msec) 00:35:55.128 slat (nsec): min=26487, max=27336, avg=26844.27, stdev=240.04 00:35:55.128 clat (usec): min=41689, max=42065, avg=41935.28, stdev=113.39 00:35:55.128 lat (usec): min=41715, max=42092, avg=41962.12, stdev=113.47 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:35:55.128 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:55.128 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:55.128 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:55.128 | 99.99th=[42206] 00:35:55.128 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:35:55.128 slat (nsec): min=9740, max=78432, avg=32126.17, stdev=8445.04 00:35:55.128 clat (usec): min=363, max=910, avg=684.78, stdev=108.92 00:35:55.128 lat (usec): min=373, max=945, avg=716.90, stdev=111.78 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 408], 5.00th=[ 494], 10.00th=[ 519], 20.00th=[ 594], 00:35:55.128 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 725], 00:35:55.128 | 70.00th=[ 758], 80.00th=[ 783], 90.00th=[ 807], 95.00th=[ 840], 00:35:55.128 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 914], 99.95th=[ 914], 00:35:55.128 | 99.99th=[ 914] 00:35:55.128 bw ( KiB/s): min= 4096, max= 4096, per=43.28%, avg=4096.00, stdev= 0.00, samples=1 00:35:55.128 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:55.128 lat (usec) : 500=6.83%, 750=58.82%, 1000=31.50% 00:35:55.128 lat (msec) : 50=2.85% 00:35:55.128 cpu : usr=0.60%, sys=1.80%, ctx=530, majf=0, minf=1 00:35:55.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:55.128 job3: (groupid=0, jobs=1): err= 0: pid=2327839: Wed Nov 20 10:53:27 2024 00:35:55.128 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:55.128 slat (nsec): min=26831, max=62004, avg=27740.34, stdev=2800.11 00:35:55.128 clat (usec): min=693, max=1354, avg=1005.00, stdev=86.96 00:35:55.128 lat (usec): min=721, max=1381, avg=1032.74, stdev=87.00 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 783], 5.00th=[ 865], 10.00th=[ 889], 20.00th=[ 938], 00:35:55.128 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:35:55.128 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:35:55.128 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1352], 99.95th=[ 1352], 00:35:55.128 | 99.99th=[ 1352] 00:35:55.128 write: IOPS=719, BW=2877KiB/s (2946kB/s)(2880KiB/1001msec); 0 zone resets 00:35:55.128 slat (nsec): min=9137, max=75935, avg=29130.58, stdev=10099.72 00:35:55.128 clat (usec): min=169, max=973, avg=611.98, stdev=132.80 00:35:55.128 lat (usec): min=200, max=1007, avg=641.11, stdev=137.34 00:35:55.128 clat percentiles (usec): 00:35:55.128 | 1.00th=[ 318], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 494], 00:35:55.128 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:35:55.128 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 807], 00:35:55.128 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 971], 
99.95th=[ 971], 00:35:55.128 | 99.99th=[ 971] 00:35:55.128 bw ( KiB/s): min= 4096, max= 4096, per=43.28%, avg=4096.00, stdev= 0.00, samples=1 00:35:55.128 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:55.128 lat (usec) : 250=0.24%, 500=11.69%, 750=37.09%, 1000=28.81% 00:35:55.128 lat (msec) : 2=22.16% 00:35:55.128 cpu : usr=3.00%, sys=4.30%, ctx=1233, majf=0, minf=2 00:35:55.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.128 issued rwts: total=512,720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:55.128 00:35:55.128 Run status group 0 (all jobs): 00:35:55.128 READ: bw=4647KiB/s (4758kB/s), 59.9KiB/s-2046KiB/s (61.4kB/s-2095kB/s), io=4684KiB (4796kB), run=1001-1008msec 00:35:55.128 WRITE: bw=9464KiB/s (9691kB/s), 2032KiB/s-2877KiB/s (2081kB/s-2946kB/s), io=9540KiB (9769kB), run=1001-1008msec 00:35:55.128 00:35:55.128 Disk stats (read/write): 00:35:55.128 nvme0n1: ios=153/512, merge=0/0, ticks=1438/311, in_queue=1749, util=96.49% 00:35:55.128 nvme0n2: ios=470/512, merge=0/0, ticks=1153/326, in_queue=1479, util=98.57% 00:35:55.128 nvme0n3: ios=58/512, merge=0/0, ticks=1386/339, in_queue=1725, util=97.47% 00:35:55.128 nvme0n4: ios=476/512, merge=0/0, ticks=464/249, in_queue=713, util=89.42% 00:35:55.129 10:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:55.129 [global] 00:35:55.129 thread=1 00:35:55.129 invalidate=1 00:35:55.129 rw=write 00:35:55.129 time_based=1 00:35:55.129 runtime=1 00:35:55.129 ioengine=libaio 00:35:55.129 direct=1 00:35:55.129 bs=4096 00:35:55.129 iodepth=128 00:35:55.129 norandommap=0 00:35:55.129 numjobs=1 00:35:55.129 00:35:55.129 verify_dump=1 00:35:55.129 verify_backlog=512 00:35:55.129 verify_state_save=0 00:35:55.129 do_verify=1 00:35:55.129 verify=crc32c-intel 00:35:55.129 [job0] 00:35:55.129 filename=/dev/nvme0n1 00:35:55.129 [job1] 00:35:55.129 filename=/dev/nvme0n2 00:35:55.129 [job2] 00:35:55.129 filename=/dev/nvme0n3 00:35:55.129 [job3] 00:35:55.129 filename=/dev/nvme0n4 00:35:55.129 Could not set queue depth (nvme0n1) 00:35:55.129 Could not set queue depth (nvme0n2) 00:35:55.129 Could not set queue depth (nvme0n3) 00:35:55.129 Could not set queue depth (nvme0n4) 00:35:55.390 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:55.390 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:55.390 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:55.390 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:55.390 fio-3.35 00:35:55.390 Starting 4 threads 00:35:56.773 00:35:56.773 job0: (groupid=0, jobs=1): err= 0: pid=2328349: Wed Nov 20 10:53:28 2024 00:35:56.774 read: IOPS=7836, BW=30.6MiB/s (32.1MB/s)(30.8MiB/1005msec) 00:35:56.774 slat (nsec): min=933, max=15078k, avg=62370.76, stdev=522695.58 00:35:56.774 clat (usec): min=2542, max=30062, avg=8546.53, stdev=3683.63 00:35:56.774 lat (usec): min=2550, max=30088, avg=8608.90, stdev=3719.98 00:35:56.774 clat percentiles (usec): 
00:35:56.774 | 1.00th=[ 4113], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 6063], 00:35:56.774 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 7242], 60.00th=[ 8029], 00:35:56.774 | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[12780], 95.00th=[15926], 00:35:56.774 | 99.00th=[21890], 99.50th=[24773], 99.90th=[27132], 99.95th=[28967], 00:35:56.774 | 99.99th=[30016] 00:35:56.774 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:35:56.774 slat (nsec): min=1626, max=11632k, avg=56959.07, stdev=495060.99 00:35:56.774 clat (usec): min=1130, max=22861, avg=7340.08, stdev=2861.67 00:35:56.774 lat (usec): min=1141, max=22869, avg=7397.04, stdev=2896.76 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 3032], 5.00th=[ 3949], 10.00th=[ 4228], 20.00th=[ 5276], 00:35:56.774 | 30.00th=[ 5932], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6915], 00:35:56.774 | 70.00th=[ 7570], 80.00th=[ 9241], 90.00th=[11469], 95.00th=[12911], 00:35:56.774 | 99.00th=[18220], 99.50th=[18744], 99.90th=[21627], 99.95th=[22938], 00:35:56.774 | 99.99th=[22938] 00:35:56.774 bw ( KiB/s): min=28664, max=36872, per=31.25%, avg=32768.00, stdev=5803.93, samples=2 00:35:56.774 iops : min= 7166, max= 9218, avg=8192.00, stdev=1450.98, samples=2 00:35:56.774 lat (msec) : 2=0.03%, 4=3.16%, 10=76.53%, 20=19.08%, 50=1.19% 00:35:56.774 cpu : usr=6.27%, sys=7.47%, ctx=390, majf=0, minf=1 00:35:56.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:56.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:56.774 issued rwts: total=7876,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:56.774 job1: (groupid=0, jobs=1): err= 0: pid=2328351: Wed Nov 20 10:53:28 2024 00:35:56.774 read: IOPS=6874, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1005msec) 00:35:56.774 slat (nsec): min=889, max=19253k, avg=66700.57, stdev=574985.99 00:35:56.774 clat (usec): min=2366, max=27773, avg=9224.40, stdev=4469.98 00:35:56.774 lat (usec): min=2373, max=31897, avg=9291.10, stdev=4506.77 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 2540], 5.00th=[ 5080], 10.00th=[ 5538], 20.00th=[ 6128], 00:35:56.774 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8225], 00:35:56.774 | 70.00th=[ 9634], 80.00th=[12649], 90.00th=[16319], 95.00th=[19006], 00:35:56.774 | 99.00th=[25035], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:35:56.774 | 99.99th=[27657] 00:35:56.774 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:35:56.774 slat (nsec): min=1544, max=17654k, avg=60526.24, stdev=534103.48 00:35:56.774 clat (usec): min=620, max=40641, avg=8870.07, stdev=4379.89 00:35:56.774 lat (usec): min=827, max=40655, avg=8930.59, stdev=4405.14 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 3261], 5.00th=[ 3949], 10.00th=[ 4621], 20.00th=[ 5932], 00:35:56.774 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7963], 00:35:56.774 | 70.00th=[ 9765], 80.00th=[11863], 90.00th=[14746], 95.00th=[18220], 00:35:56.774 | 99.00th=[20841], 99.50th=[21627], 99.90th=[33817], 99.95th=[38536], 00:35:56.774 | 99.99th=[40633] 00:35:56.774 bw ( KiB/s): min=20480, max=36864, per=27.35%, avg=28672.00, stdev=11585.24, samples=2 00:35:56.774 iops : min= 5120, max= 9216, avg=7168.00, stdev=2896.31, samples=2 00:35:56.774 lat (usec) : 750=0.01% 00:35:56.774 lat (msec) : 2=0.18%, 4=4.68%, 
10=65.69%, 20=26.06%, 50=3.38% 00:35:56.774 cpu : usr=5.58%, sys=6.37%, ctx=393, majf=0, minf=1 00:35:56.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:56.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:56.774 issued rwts: total=6909,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:56.774 job2: (groupid=0, jobs=1): err= 0: pid=2328357: Wed Nov 20 10:53:28 2024 00:35:56.774 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:35:56.774 slat (nsec): min=915, max=15204k, avg=85399.39, stdev=701854.50 00:35:56.774 clat (usec): min=2755, max=31009, avg=10993.24, stdev=4617.85 00:35:56.774 lat (usec): min=2760, max=31033, avg=11078.64, stdev=4666.77 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 4293], 5.00th=[ 6128], 10.00th=[ 7111], 20.00th=[ 8029], 00:35:56.774 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:35:56.774 | 70.00th=[11994], 80.00th=[14353], 90.00th=[17957], 95.00th=[19530], 00:35:56.774 | 99.00th=[28705], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:35:56.774 | 99.99th=[31065] 00:35:56.774 write: IOPS=6092, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1006msec); 0 zone resets 00:35:56.774 slat (nsec): min=1613, max=14391k, avg=65987.75, stdev=553091.17 00:35:56.774 clat (usec): min=1052, max=63806, avg=10735.26, stdev=8349.66 00:35:56.774 lat (usec): min=1172, max=63808, avg=10801.25, stdev=8395.92 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 1926], 5.00th=[ 2868], 10.00th=[ 4113], 20.00th=[ 5211], 00:35:56.774 | 30.00th=[ 6587], 40.00th=[ 7308], 50.00th=[ 7963], 60.00th=[ 9241], 00:35:56.774 | 70.00th=[12256], 80.00th=[14877], 90.00th=[17433], 95.00th=[30016], 00:35:56.774 | 99.00th=[44303], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:35:56.774 | 99.99th=[63701] 00:35:56.774 bw ( KiB/s): min=21448, max=26560, per=22.89%, avg=24004.00, stdev=3614.73, samples=2 00:35:56.774 iops : min= 5362, max= 6640, avg=6001.00, stdev=903.68, samples=2 00:35:56.774 lat (msec) : 2=0.71%, 4=4.65%, 10=55.41%, 20=33.48%, 50=5.56% 00:35:56.774 lat (msec) : 100=0.19% 00:35:56.774 cpu : usr=4.28%, sys=5.97%, ctx=367, majf=0, minf=1 00:35:56.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:56.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:56.774 issued rwts: total=5632,6129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:56.774 job3: (groupid=0, jobs=1): err= 0: pid=2328358: Wed Nov 20 10:53:28 2024 00:35:56.774 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:35:56.774 slat (nsec): min=997, max=13251k, avg=84597.67, stdev=666701.05 00:35:56.774 clat (usec): min=3599, max=46180, avg=10555.08, stdev=4338.65 00:35:56.774 lat (usec): min=3608, max=46189, avg=10639.68, stdev=4406.19 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 5014], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 7898], 00:35:56.774 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:35:56.774 | 70.00th=[11207], 80.00th=[12911], 90.00th=[15664], 95.00th=[19006], 00:35:56.774 | 99.00th=[25035], 99.50th=[31589], 99.90th=[46400], 99.95th=[46400], 00:35:56.774 | 99.99th=[46400] 
00:35:56.774 write: IOPS=4855, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1005msec); 0 zone resets 00:35:56.774 slat (nsec): min=1665, max=15089k, avg=119397.26, stdev=804581.99 00:35:56.774 clat (usec): min=1145, max=85055, avg=16166.09, stdev=16976.27 00:35:56.774 lat (usec): min=1187, max=85062, avg=16285.48, stdev=17091.60 00:35:56.774 clat percentiles (usec): 00:35:56.774 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 5276], 20.00th=[ 6849], 00:35:56.774 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 9241], 60.00th=[11469], 00:35:56.774 | 70.00th=[15008], 80.00th=[19530], 90.00th=[40109], 95.00th=[65799], 00:35:56.774 | 99.00th=[76022], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:35:56.774 | 99.99th=[85459] 00:35:56.774 bw ( KiB/s): min=12272, max=25752, per=18.13%, avg=19012.00, stdev=9531.80, samples=2 00:35:56.774 iops : min= 3068, max= 6438, avg=4753.00, stdev=2382.95, samples=2 00:35:56.774 lat (msec) : 2=0.01%, 4=0.39%, 10=55.10%, 20=33.90%, 50=6.41% 00:35:56.774 lat (msec) : 100=4.19% 00:35:56.774 cpu : usr=4.18%, sys=4.98%, ctx=274, majf=0, minf=1 00:35:56.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:56.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:56.774 issued rwts: total=4608,4880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:56.774 00:35:56.774 Run status group 0 (all jobs): 00:35:56.774 READ: bw=97.2MiB/s (102MB/s), 17.9MiB/s-30.6MiB/s (18.8MB/s-32.1MB/s), io=97.8MiB (103MB), run=1005-1006msec 00:35:56.774 WRITE: bw=102MiB/s (107MB/s), 19.0MiB/s-31.8MiB/s (19.9MB/s-33.4MB/s), io=103MiB (108MB), run=1005-1006msec 00:35:56.774 00:35:56.774 Disk stats (read/write): 00:35:56.774 nvme0n1: ios=7218/7549, merge=0/0, ticks=52967/49963, in_queue=102930, util=88.48% 00:35:56.774 nvme0n2: ios=6192/6505, merge=0/0, ticks=43651/43276, in_queue=86927, util=98.58% 00:35:56.774 nvme0n3: ios=4629/4787, merge=0/0, ticks=47295/52482, in_queue=99777, util=89.30% 00:35:56.774 nvme0n4: ios=3460/3584, merge=0/0, ticks=36753/67451, in_queue=104204, util=89.63% 00:35:56.774 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:56.774 [global] 00:35:56.774 thread=1 00:35:56.774 invalidate=1 00:35:56.774 rw=randwrite 00:35:56.774 time_based=1 00:35:56.774 runtime=1 00:35:56.774 ioengine=libaio 00:35:56.774 direct=1 00:35:56.774 bs=4096 00:35:56.774 iodepth=128 00:35:56.774 norandommap=0 00:35:56.774 numjobs=1 00:35:56.774 00:35:56.774 verify_dump=1 00:35:56.774 verify_backlog=512 00:35:56.774 verify_state_save=0 00:35:56.774 do_verify=1 00:35:56.774 verify=crc32c-intel 00:35:56.774 [job0] 00:35:56.774 filename=/dev/nvme0n1 00:35:56.774 [job1] 00:35:56.774 filename=/dev/nvme0n2 00:35:56.774 [job2] 00:35:56.774 filename=/dev/nvme0n3 00:35:56.774 [job3] 00:35:56.774 filename=/dev/nvme0n4 00:35:56.774 Could not set queue depth (nvme0n1) 00:35:56.774 Could not set queue depth (nvme0n2) 00:35:56.774 Could not set queue depth (nvme0n3) 00:35:56.774 Could not set queue depth (nvme0n4) 00:35:57.034 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:57.034 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:35:57.034 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:57.034 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:57.034 fio-3.35 00:35:57.034 Starting 4 threads 00:35:58.418 00:35:58.418 job0: (groupid=0, jobs=1): err= 0: pid=2328876: Wed Nov 20 10:53:30 2024 00:35:58.418 read: IOPS=4689, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1006msec) 00:35:58.418 slat (nsec): min=942, max=8651.2k, avg=105647.92, stdev=674219.08 00:35:58.418 clat (usec): min=3721, max=26720, avg=13812.67, stdev=3819.36 00:35:58.418 lat (usec): min=3727, max=26726, avg=13918.32, stdev=3875.10 00:35:58.418 clat percentiles (usec): 00:35:58.418 | 1.00th=[ 5473], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[10421], 00:35:58.418 | 30.00th=[12125], 40.00th=[13042], 50.00th=[14091], 60.00th=[14746], 00:35:58.418 | 70.00th=[15664], 80.00th=[17171], 90.00th=[18744], 95.00th=[19268], 00:35:58.418 | 99.00th=[23462], 99.50th=[23987], 99.90th=[25822], 99.95th=[26608], 00:35:58.418 | 99.99th=[26608] 00:35:58.418 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:35:58.418 slat (nsec): min=1633, max=30394k, avg=91164.30, stdev=708471.50 00:35:58.418 clat (usec): min=1170, max=46427, avg=12082.53, stdev=6401.34 00:35:58.418 lat (usec): min=1177, max=46436, avg=12173.70, stdev=6442.39 00:35:58.418 clat percentiles (usec): 00:35:58.418 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 7832], 00:35:58.418 | 30.00th=[ 8586], 40.00th=[10552], 50.00th=[11469], 60.00th=[11994], 00:35:58.418 | 70.00th=[13042], 80.00th=[13829], 90.00th=[16581], 95.00th=[22414], 00:35:58.418 | 99.00th=[41681], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:35:58.418 | 99.99th=[46400] 00:35:58.418 bw ( KiB/s): min=20344, max=20480, per=20.77%, avg=20412.00, stdev=96.17, samples=2 00:35:58.418 iops : min= 5086, max= 5120, avg=5103.00, stdev=24.04, samples=2 00:35:58.418 lat (msec) : 2=0.09%, 4=0.32%, 10=27.69%, 20=66.49%, 50=5.42% 00:35:58.418 cpu : usr=4.18%, sys=4.88%, ctx=316, majf=0, minf=1 00:35:58.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:58.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:58.418 issued rwts: total=4718,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:58.418 job1: (groupid=0, jobs=1): err= 0: pid=2328877: Wed Nov 20 10:53:30 2024 00:35:58.418 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:35:58.418 slat (nsec): min=879, max=7578.8k, avg=60085.86, stdev=425839.93 00:35:58.418 clat (usec): min=1471, max=57091, avg=8485.44, stdev=5415.27 00:35:58.418 lat (usec): min=1478, max=57098, avg=8545.53, stdev=5425.76 00:35:58.418 clat percentiles (usec): 00:35:58.418 | 1.00th=[ 2573], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6259], 00:35:58.418 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 7832], 00:35:58.418 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10683], 95.00th=[16188], 00:35:58.418 | 99.00th=[38536], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:35:58.418 | 99.99th=[56886] 00:35:58.418 write: IOPS=7588, BW=29.6MiB/s (31.1MB/s)(29.8MiB/1005msec); 0 zone resets 00:35:58.418 slat (nsec): min=1506, max=13476k, avg=63270.92, stdev=426372.85 00:35:58.418 clat (usec): min=547, max=36969, avg=8720.67, stdev=5777.24 
00:35:58.418 lat (usec): min=710, max=36976, avg=8783.94, stdev=5810.49 00:35:58.418 clat percentiles (usec): 00:35:58.418 | 1.00th=[ 1483], 5.00th=[ 3720], 10.00th=[ 4555], 20.00th=[ 5800], 00:35:58.418 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7570], 00:35:58.418 | 70.00th=[ 8356], 80.00th=[10290], 90.00th=[13829], 95.00th=[23987], 00:35:58.418 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[36963], 00:35:58.418 | 99.99th=[36963] 00:35:58.418 bw ( KiB/s): min=23120, max=36864, per=30.52%, avg=29992.00, stdev=9718.48, samples=2 00:35:58.418 iops : min= 5780, max= 9216, avg=7498.00, stdev=2429.62, samples=2 00:35:58.418 lat (usec) : 750=0.05% 00:35:58.418 lat (msec) : 2=0.99%, 4=4.03%, 10=77.77%, 20=13.01%, 50=3.70% 00:35:58.418 lat (msec) : 100=0.45% 00:35:58.418 cpu : usr=4.38%, sys=6.18%, ctx=579, majf=0, minf=2 00:35:58.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:58.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:58.418 issued rwts: total=7168,7626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:58.418 job2: (groupid=0, jobs=1): err= 0: pid=2328878: Wed Nov 20 10:53:30 2024 00:35:58.418 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:35:58.418 slat (nsec): min=927, max=16013k, avg=83428.80, stdev=511644.99 00:35:58.418 clat (usec): min=4303, max=47218, avg=10776.24, stdev=5008.84 00:35:58.418 lat (usec): min=4313, max=48716, avg=10859.67, stdev=5041.69 00:35:58.418 clat percentiles (usec): 00:35:58.418 | 1.00th=[ 5145], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7963], 00:35:58.418 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:35:58.418 | 70.00th=[10421], 80.00th=[12780], 90.00th=[17433], 95.00th=[19006], 00:35:58.418 | 99.00th=[30802], 99.50th=[30802], 99.90th=[47449], 99.95th=[47449], 00:35:58.418 | 99.99th=[47449] 00:35:58.418 write: IOPS=6075, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1002msec); 0 zone resets 00:35:58.418 slat (nsec): min=1524, max=7437.3k, avg=83090.60, stdev=423750.24 00:35:58.418 clat (usec): min=1307, max=35479, avg=10817.61, stdev=5110.12 00:35:58.419 lat (usec): min=1315, max=35489, avg=10900.70, stdev=5145.66 00:35:58.419 clat percentiles (usec): 00:35:58.419 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 7111], 20.00th=[ 7898], 00:35:58.419 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:35:58.419 | 70.00th=[11338], 80.00th=[13304], 90.00th=[16712], 95.00th=[21103], 00:35:58.419 | 99.00th=[32113], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:35:58.419 | 99.99th=[35390] 00:35:58.419 bw ( KiB/s): min=23112, max=24576, per=24.26%, avg=23844.00, stdev=1035.20, samples=2 00:35:58.419 iops : min= 5778, max= 6144, avg=5961.00, stdev=258.80, samples=2 00:35:58.419 lat (msec) : 2=0.15%, 10=66.12%, 20=28.79%, 50=4.95% 00:35:58.419 cpu : usr=2.20%, sys=5.19%, ctx=607, majf=0, minf=1 00:35:58.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:58.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:58.419 issued rwts: total=5632,6088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:58.419 job3: (groupid=0, jobs=1): err= 0: pid=2328879: Wed Nov 20 
10:53:30 2024 00:35:58.419 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:35:58.419 slat (nsec): min=933, max=7873.8k, avg=91419.91, stdev=561849.19 00:35:58.419 clat (usec): min=3937, max=26214, avg=11309.46, stdev=4453.85 00:35:58.419 lat (usec): min=3944, max=26235, avg=11400.88, stdev=4484.96 00:35:58.419 clat percentiles (usec): 00:35:58.419 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7898], 00:35:58.419 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[11338], 00:35:58.419 | 70.00th=[12649], 80.00th=[13960], 90.00th=[17957], 95.00th=[21890], 00:35:58.419 | 99.00th=[24249], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:35:58.419 | 99.99th=[26346] 00:35:58.419 write: IOPS=5864, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1003msec); 0 zone resets 00:35:58.419 slat (nsec): min=1537, max=6834.2k, avg=77905.54, stdev=478090.04 00:35:58.419 clat (usec): min=2060, max=23632, avg=10740.02, stdev=3913.60 00:35:58.419 lat (usec): min=2069, max=23654, avg=10817.92, stdev=3941.56 00:35:58.419 clat percentiles (usec): 00:35:58.419 | 1.00th=[ 4424], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7373], 00:35:58.419 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10683], 00:35:58.419 | 70.00th=[12649], 80.00th=[14877], 90.00th=[16909], 95.00th=[17957], 00:35:58.419 | 99.00th=[19268], 99.50th=[20841], 99.90th=[22676], 99.95th=[23200], 00:35:58.419 | 99.99th=[23725] 00:35:58.419 bw ( KiB/s): min=22720, max=23320, per=23.42%, avg=23020.00, stdev=424.26, samples=2 00:35:58.419 iops : min= 5680, max= 5830, avg=5755.00, stdev=106.07, samples=2 00:35:58.419 lat (msec) : 4=0.40%, 10=53.62%, 20=42.49%, 50=3.49% 00:35:58.419 cpu : usr=3.69%, sys=4.99%, ctx=491, majf=0, minf=2 00:35:58.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:58.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:58.419 issued rwts: total=5632,5882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:58.419 00:35:58.419 Run status group 0 (all jobs): 00:35:58.419 READ: bw=89.9MiB/s (94.3MB/s), 18.3MiB/s-27.9MiB/s (19.2MB/s-29.2MB/s), io=90.4MiB (94.8MB), run=1002-1006msec 00:35:58.419 WRITE: bw=96.0MiB/s (101MB/s), 19.9MiB/s-29.6MiB/s (20.8MB/s-31.1MB/s), io=96.5MiB (101MB), run=1002-1006msec 00:35:58.419 00:35:58.419 Disk stats (read/write): 00:35:58.419 nvme0n1: ios=4087/4096, merge=0/0, ticks=22798/18106, in_queue=40904, util=96.79% 00:35:58.419 nvme0n2: ios=6182/6494, merge=0/0, ticks=34874/37199, in_queue=72073, util=87.35% 00:35:58.419 nvme0n3: ios=4608/4695, merge=0/0, ticks=17447/17610, in_queue=35057, util=88.38% 00:35:58.419 nvme0n4: ios=4777/5120, merge=0/0, ticks=19499/18126, in_queue=37625, util=88.77% 00:35:58.419 10:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:58.419 10:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2329209 00:35:58.419 10:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:58.419 10:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:58.419 [global] 00:35:58.419 thread=1 00:35:58.419 invalidate=1 00:35:58.419 rw=read 00:35:58.419 time_based=1 
00:35:58.419 runtime=10 00:35:58.419 ioengine=libaio 00:35:58.419 direct=1 00:35:58.419 bs=4096 00:35:58.419 iodepth=1 00:35:58.419 norandommap=1 00:35:58.419 numjobs=1 00:35:58.419 00:35:58.419 [job0] 00:35:58.419 filename=/dev/nvme0n1 00:35:58.419 [job1] 00:35:58.419 filename=/dev/nvme0n2 00:35:58.419 [job2] 00:35:58.419 filename=/dev/nvme0n3 00:35:58.419 [job3] 00:35:58.419 filename=/dev/nvme0n4 00:35:58.419 Could not set queue depth (nvme0n1) 00:35:58.419 Could not set queue depth (nvme0n2) 00:35:58.419 Could not set queue depth (nvme0n3) 00:35:58.419 Could not set queue depth (nvme0n4) 00:35:58.680 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:58.680 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:58.680 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:58.680 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:58.680 fio-3.35 00:35:58.680 Starting 4 threads 00:36:01.222 10:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:01.482 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10579968, buflen=4096 00:36:01.482 fio: pid=2329406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:01.482 10:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:01.743 10:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:01.743 10:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:01.743 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14696448, buflen=4096 00:36:01.743 fio: pid=2329405, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:01.743 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:01.743 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:01.743 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=299008, buflen=4096 00:36:01.743 fio: pid=2329403, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:02.003 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:02.003 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:02.003 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=327680, buflen=4096 00:36:02.003 fio: pid=2329404, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:02.003 00:36:02.003 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, 
func=io_u error, error=Operation not supported): pid=2329403: Wed Nov 20 10:53:34 2024 00:36:02.003 read: IOPS=24, BW=98.1KiB/s (100kB/s)(292KiB/2977msec) 00:36:02.003 slat (usec): min=24, max=13568, avg=210.04, stdev=1574.25 00:36:02.003 clat (usec): min=1116, max=42064, avg=40252.94, stdev=8158.45 00:36:02.003 lat (usec): min=1144, max=54975, avg=40465.51, stdev=8332.28 00:36:02.003 clat percentiles (usec): 00:36:02.003 | 1.00th=[ 1123], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:36:02.003 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:36:02.003 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:02.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:02.003 | 99.99th=[42206] 00:36:02.003 bw ( KiB/s): min= 96, max= 112, per=1.24%, avg=99.20, stdev= 7.16, samples=5 00:36:02.004 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:36:02.004 lat (msec) : 2=4.05%, 50=94.59% 00:36:02.004 cpu : usr=0.00%, sys=0.10%, ctx=77, majf=0, minf=1 00:36:02.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:02.004 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2329404: Wed Nov 20 10:53:34 2024 00:36:02.004 read: IOPS=25, BW=101KiB/s (103kB/s)(320KiB/3180msec) 00:36:02.004 slat (usec): min=25, max=8638, avg=134.54, stdev=956.73 00:36:02.004 clat (usec): min=978, max=45033, avg=39274.45, stdev=9947.04 00:36:02.004 lat (usec): min=1003, max=50106, avg=39410.34, stdev=10017.22 00:36:02.004 clat percentiles (usec): 00:36:02.004 | 1.00th=[ 979], 5.00th=[ 1090], 10.00th=[40633], 20.00th=[41157], 00:36:02.004 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:36:02.004 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:02.004 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:36:02.004 | 99.99th=[44827] 00:36:02.004 bw ( KiB/s): min= 96, max= 128, per=1.27%, avg=101.33, stdev=13.06, samples=6 00:36:02.004 iops : min= 24, max= 32, avg=25.33, stdev= 3.27, samples=6 00:36:02.004 lat (usec) : 1000=2.47% 00:36:02.004 lat (msec) : 2=3.70%, 50=92.59% 00:36:02.004 cpu : usr=0.00%, sys=0.16%, ctx=84, majf=0, minf=2 00:36:02.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:02.004 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2329405: Wed Nov 20 10:53:34 2024 00:36:02.004 read: IOPS=1288, BW=5151KiB/s (5275kB/s)(14.0MiB/2786msec) 00:36:02.004 slat (nsec): min=6262, max=63288, avg=24699.08, stdev=7672.73 00:36:02.004 clat (usec): min=332, max=1042, avg=740.18, stdev=106.30 00:36:02.004 lat (usec): min=340, max=1068, avg=764.88, stdev=108.58 00:36:02.004 clat percentiles (usec): 00:36:02.004 | 1.00th=[ 445], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 652], 
00:36:02.004 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 775], 00:36:02.004 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:36:02.004 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 988], 99.95th=[ 996], 00:36:02.004 | 99.99th=[ 1045] 00:36:02.004 bw ( KiB/s): min= 5152, max= 5280, per=65.41%, avg=5203.20, stdev=50.79, samples=5 00:36:02.004 iops : min= 1288, max= 1320, avg=1300.80, stdev=12.70, samples=5 00:36:02.004 lat (usec) : 500=2.51%, 750=47.76%, 1000=49.68% 00:36:02.004 lat (msec) : 2=0.03% 00:36:02.004 cpu : usr=1.80%, sys=5.03%, ctx=3590, majf=0, minf=2 00:36:02.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 issued rwts: total=3589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:02.004 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2329406: Wed Nov 20 10:53:34 2024 00:36:02.004 read: IOPS=991, BW=3963KiB/s (4058kB/s)(10.1MiB/2607msec) 00:36:02.004 slat (nsec): min=6743, max=63895, avg=27524.23, stdev=2010.10 00:36:02.004 clat (usec): min=632, max=1989, avg=965.11, stdev=71.97 00:36:02.004 lat (usec): min=639, max=2017, avg=992.64, stdev=72.13 00:36:02.004 clat percentiles (usec): 00:36:02.004 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 922], 00:36:02.004 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:36:02.004 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:36:02.004 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1221], 99.95th=[ 1811], 00:36:02.004 | 99.99th=[ 1991] 00:36:02.004 bw ( KiB/s): min= 4000, max= 4040, per=50.44%, avg=4012.80, stdev=16.59, samples=5 00:36:02.004 iops : min= 1000, max= 1010, avg=1003.20, stdev= 4.15, samples=5 00:36:02.004 lat (usec) : 750=0.89%, 1000=70.36% 00:36:02.004 lat (msec) : 2=28.72% 00:36:02.004 cpu : usr=1.73%, sys=4.18%, ctx=2584, majf=0, minf=2 00:36:02.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:02.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:02.004 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:02.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:02.004 00:36:02.004 Run status group 0 (all jobs): 00:36:02.004 READ: bw=7955KiB/s (8146kB/s), 98.1KiB/s-5151KiB/s (100kB/s-5275kB/s), io=24.7MiB (25.9MB), run=2607-3180msec 00:36:02.004 00:36:02.004 Disk stats (read/write): 00:36:02.004 nvme0n1: ios=70/0, merge=0/0, ticks=2815/0, in_queue=2815, util=94.32% 00:36:02.004 nvme0n2: ios=78/0, merge=0/0, ticks=3061/0, in_queue=3061, util=95.41% 00:36:02.004 nvme0n3: ios=3358/0, merge=0/0, ticks=2151/0, in_queue=2151, util=95.99% 00:36:02.004 nvme0n4: ios=2583/0, merge=0/0, ticks=2533/0, in_queue=2533, util=96.46% 00:36:02.264 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:02.264 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:02.525 10:53:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:02.525 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:02.525 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:02.525 10:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:02.787 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:02.787 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:03.048 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:03.048 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2329209 00:36:03.048 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:03.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:03.049 nvmf hotplug test: fio failed as expected 00:36:03.049 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:03.310 10:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:03.310 rmmod nvme_tcp 00:36:03.310 rmmod nvme_fabrics 00:36:03.310 rmmod nvme_keyring 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2325822 ']' 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2325822 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2325822 ']' 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2325822 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2325822 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2325822' 00:36:03.310 killing process with pid 2325822 00:36:03.310 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2325822 00:36:03.311 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2325822 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.573 10:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.485 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:05.485 00:36:05.485 real 0m28.303s 00:36:05.485 user 2m19.603s 00:36:05.485 sys 0m11.951s 00:36:05.485 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:05.485 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.485 ************************************ 00:36:05.485 END TEST nvmf_fio_target 00:36:05.485 ************************************ 00:36:05.485 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:05.748 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:05.748 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:05.748 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:05.748 ************************************ 00:36:05.748 START TEST nvmf_bdevio 00:36:05.748 ************************************ 00:36:05.748 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:05.748 * Looking for test storage... 
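With nvmf_fio_target finished, run_test starts nvmf_bdevio, and the traces that follow repeat the harness shape every suite in this log uses: locate test storage, source test/nvmf/common.sh, bring the target up, run the payload, then tear everything down through the same nvmftestfini path seen above (rmmod nvme-tcp/nvme-fabrics/nvme-keyring, iptables restore, namespace flush). A minimal sketch of that skeleton, using the function names from the traces (the payload body here is a placeholder, not the real bdevio.sh):

  #!/usr/bin/env bash
  # Skeleton of an SPDK nvmf autotest script as reflected in the xtrace output.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  source "$rootdir/test/nvmf/common.sh"   # nvmftestinit/nvmftestfini, NVMF_* vars

  nvmftestinit                 # NIC discovery + netns plumbing for NET_TYPE=phy
  nvmfappstart -m 0x78         # launch nvmf_tgt (interrupt mode in this run)
  trap 'nvmftestfini' SIGINT SIGTERM EXIT

  # ... payload: create subsystems, connect, drive I/O, assert results ...

  trap - SIGINT SIGTERM EXIT   # matches the trap-clear seen at fio.sh@89 above
  nvmftestfini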
00:36:05.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:05.748 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:05.748 10:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.748 --rc genhtml_branch_coverage=1 00:36:05.748 --rc genhtml_function_coverage=1 00:36:05.748 --rc genhtml_legend=1 00:36:05.748 --rc geninfo_all_blocks=1 00:36:05.748 --rc geninfo_unexecuted_blocks=1 00:36:05.748 00:36:05.748 ' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.748 --rc genhtml_branch_coverage=1 00:36:05.748 --rc genhtml_function_coverage=1 00:36:05.748 --rc genhtml_legend=1 00:36:05.748 --rc geninfo_all_blocks=1 00:36:05.748 --rc geninfo_unexecuted_blocks=1 00:36:05.748 00:36:05.748 ' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.748 --rc genhtml_branch_coverage=1 00:36:05.748 --rc genhtml_function_coverage=1 00:36:05.748 --rc genhtml_legend=1 00:36:05.748 --rc geninfo_all_blocks=1 00:36:05.748 --rc geninfo_unexecuted_blocks=1 00:36:05.748 00:36:05.748 ' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.748 --rc genhtml_branch_coverage=1 00:36:05.748 --rc genhtml_function_coverage=1 00:36:05.748 --rc genhtml_legend=1 00:36:05.748 --rc geninfo_all_blocks=1 00:36:05.748 --rc geninfo_unexecuted_blocks=1 00:36:05.748 00:36:05.748 ' 00:36:05.748 10:53:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.748 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.010 10:53:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.010 10:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:14.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:14.149 10:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:14.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.149 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:14.149 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:14.150 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:36:14.150 00:36:14.150 --- 10.0.0.2 ping statistics --- 00:36:14.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.150 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:14.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:36:14.150 00:36:14.150 --- 10.0.0.1 ping statistics --- 00:36:14.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.150 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.150 10:53:45 
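Everything nvmf_tcp_init did above boils down to a point-to-point rig built from the two physical ports: the target port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2/24, the initiator port (cvl_0_1) stays in the default namespace as 10.0.0.1/24, an iptables rule opens TCP 4420 (tagged with an SPDK_NVMF comment so teardown can find it), and a ping in each direction proves the link. A minimal standalone sketch of the same setup, assuming the interface names and addressing from this run:

# Sketch of nvmf_tcp_init's namespace topology for a phy (e810) run.
TGT_IF=cvl_0_0          # port that will host the NVMe-oF target
INI_IF=cvl_0_1          # port the initiator connects from
NS=cvl_0_0_ns_spdk      # namespace isolating the target side

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"         # target port leaves the default ns

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port, tagged so cleanup can strip it later
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator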
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2334425 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2334425 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2334425 ']' 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.150 10:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.150 [2024-11-20 10:53:45.725093] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:14.150 [2024-11-20 10:53:45.726221] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:36:14.150 [2024-11-20 10:53:45.726273] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.150 [2024-11-20 10:53:45.810773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:14.150 [2024-11-20 10:53:45.863623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.150 [2024-11-20 10:53:45.863673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.150 [2024-11-20 10:53:45.863682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.150 [2024-11-20 10:53:45.863689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.150 [2024-11-20 10:53:45.863695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.150 [2024-11-20 10:53:45.866121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:14.150 [2024-11-20 10:53:45.866282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:14.150 [2024-11-20 10:53:45.866617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:14.150 [2024-11-20 10:53:45.866620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:14.150 [2024-11-20 10:53:45.942802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:14.150 [2024-11-20 10:53:45.944075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:14.150 [2024-11-20 10:53:45.944097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:14.150 [2024-11-20 10:53:45.944571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:14.150 [2024-11-20 10:53:45.944616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.411 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.412 [2024-11-20 10:53:46.583664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.412 Malloc0 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.412 10:53:46 
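Stepping back to the launch itself: nvmfappstart ran nvmf_tgt inside the target namespace with --interrupt-mode and core mask 0x78 (cores 3-6, matching the four reactors and poll-group threads reported above), and waitforlisten blocked until the app answered on /var/tmp/spdk.sock before any rpc_cmd was issued. A simplified stand-in for that start-and-wait pattern ($SPDK_BIN_DIR and $SPDK_DIR are placeholders for this tree's paths, and the polling loop is a much-reduced version of the real waitforlisten helper):

# Simplified sketch of nvmfappstart + waitforlisten.
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF \
    --interrupt-mode -m 0x78 &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds once the app serves its RPC socket
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
        &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done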
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:14.412 [2024-11-20 10:53:46.675973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.412 { 00:36:14.412 "params": { 00:36:14.412 "name": "Nvme$subsystem", 00:36:14.412 "trtype": "$TEST_TRANSPORT", 00:36:14.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.412 "adrfam": "ipv4", 00:36:14.412 "trsvcid": "$NVMF_PORT", 00:36:14.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.412 "hdgst": ${hdgst:-false}, 00:36:14.412 "ddgst": ${ddgst:-false} 00:36:14.412 }, 00:36:14.412 "method": "bdev_nvme_attach_controller" 00:36:14.412 } 00:36:14.412 EOF 00:36:14.412 )") 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:14.412 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:14.412 "params": { 00:36:14.412 "name": "Nvme1", 00:36:14.412 "trtype": "tcp", 00:36:14.412 "traddr": "10.0.0.2", 00:36:14.412 "adrfam": "ipv4", 00:36:14.412 "trsvcid": "4420", 00:36:14.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:14.412 "hdgst": false, 00:36:14.412 "ddgst": false 00:36:14.412 }, 00:36:14.412 "method": "bdev_nvme_attach_controller" 00:36:14.412 }' 00:36:14.412 [2024-11-20 10:53:46.733320] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
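The rpc_cmd calls from bdevio.sh amount to a short provisioning script: create the TCP transport, back subsystem cnode1 with a 64 MiB malloc bdev (512-byte blocks), attach it as a namespace, and listen on 10.0.0.2:4420. On the initiator side, bdevio consumes the bdev_nvme_attach_controller parameters printed above; gen_nvmf_target_json wraps that fragment in a complete bdev-subsystem config before handing it to --json. A hedged rendering of the same sequence, with the NQNs and addresses from this run:

# Target-side provisioning, mirroring bdevio.sh lines 18-22 in the trace.
rpc=("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock)
"${rpc[@]}" nvmf_create_transport -t tcp -o -u 8192
"${rpc[@]}" bdev_malloc_create 64 512 -b Malloc0
"${rpc[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
"${rpc[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"${rpc[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# Initiator side: feed bdevio the wrapped attach-controller config.
"$SPDK_DIR/test/bdev/bdevio/bdevio" --json <(cat << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
)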
00:36:14.412 [2024-11-20 10:53:46.733397] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334715 ] 00:36:14.672 [2024-11-20 10:53:46.827354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:14.672 [2024-11-20 10:53:46.883324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.672 [2024-11-20 10:53:46.883477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:14.672 [2024-11-20 10:53:46.883477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.933 I/O targets: 00:36:14.933 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:14.933 00:36:14.933 00:36:14.933 CUnit - A unit testing framework for C - Version 2.1-3 00:36:14.933 http://cunit.sourceforge.net/ 00:36:14.933 00:36:14.933 00:36:14.933 Suite: bdevio tests on: Nvme1n1 00:36:14.933 Test: blockdev write read block ...passed 00:36:14.933 Test: blockdev write zeroes read block ...passed 00:36:14.933 Test: blockdev write zeroes read no split ...passed 00:36:14.933 Test: blockdev write zeroes read split ...passed 00:36:14.933 Test: blockdev write zeroes read split partial ...passed 00:36:14.933 Test: blockdev reset ...[2024-11-20 10:53:47.213305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:14.933 [2024-11-20 10:53:47.213407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d2970 (9): Bad file descriptor 00:36:14.933 [2024-11-20 10:53:47.268143] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:36:14.933 passed 00:36:15.194 Test: blockdev write read 8 blocks ...passed 00:36:15.194 Test: blockdev write read size > 128k ...passed 00:36:15.194 Test: blockdev write read invalid size ...passed 00:36:15.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:15.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:15.194 Test: blockdev write read max offset ...passed 00:36:15.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:15.194 Test: blockdev writev readv 8 blocks ...passed 00:36:15.194 Test: blockdev writev readv 30 x 1block ...passed 00:36:15.194 Test: blockdev writev readv block ...passed 00:36:15.194 Test: blockdev writev readv size > 128k ...passed 00:36:15.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:15.194 Test: blockdev comparev and writev ...[2024-11-20 10:53:47.493074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.493122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.493146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.493156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.493830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.493842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.493856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.493863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.494573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.494585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.494599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.494607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.495253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.495265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:15.194 [2024-11-20 10:53:47.495279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:15.194 [2024-11-20 10:53:47.495287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:15.194 passed 00:36:15.456 Test: blockdev nvme passthru rw ...passed 00:36:15.456 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:53:47.579843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:15.456 [2024-11-20 10:53:47.579858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:15.456 [2024-11-20 10:53:47.580241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:15.456 [2024-11-20 10:53:47.580253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:15.456 [2024-11-20 10:53:47.580650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:15.456 [2024-11-20 10:53:47.580661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:15.456 [2024-11-20 10:53:47.581055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:15.456 [2024-11-20 10:53:47.581067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:15.456 passed 00:36:15.456 Test: blockdev nvme admin passthru ...passed 00:36:15.456 Test: blockdev copy ...passed 00:36:15.456 00:36:15.456 Run Summary: Type Total Ran Passed Failed Inactive 00:36:15.456 suites 1 1 n/a 0 0 00:36:15.456 tests 23 23 23 0 0 00:36:15.456 asserts 152 152 152 0 n/a 00:36:15.456 00:36:15.456 Elapsed time = 1.110 seconds 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:15.456 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:15.456 rmmod nvme_tcp 00:36:15.456 rmmod nvme_fabrics 00:36:15.456 rmmod nvme_keyring 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2334425 ']' 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2334425 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2334425 ']' 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2334425 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334425 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334425' 00:36:15.718 killing process with pid 2334425 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2334425 00:36:15.718 10:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2334425 00:36:15.718 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.979 10:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.891 10:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:17.891 00:36:17.891 real 0m12.283s 00:36:17.891 user 
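nvmftestfini, traced above, unwinds all of it: the nvme kernel modules are unloaded (each modprobe -v -r is retried under set +e, which is why the rmmod output appears between the trace lines), the target is killed by its pid after a reactor-name sanity check, iptr restores iptables minus every rule tagged SPDK_NVMF, and remove_spdk_ns deletes the namespace, which hands cvl_0_0 back to the default namespace. A condensed sketch of that teardown:

# Condensed sketch of nvmftestfini for this tcp/phy configuration.
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring || true  # harness retries these

kill "$nvmfpid" && wait "$nvmfpid"       # stop nvmf_tgt

# drop only the rules tagged SPDK_NVMF at setup time
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk          # cvl_0_0 returns to the default ns
ip -4 addr flush cvl_0_1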
0m9.439s 00:36:17.891 sys 0m6.559s 00:36:17.891 10:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.891 10:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:17.891 ************************************ 00:36:17.891 END TEST nvmf_bdevio 00:36:17.891 ************************************ 00:36:17.891 10:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:17.891 00:36:17.891 real 5m0.588s 00:36:17.891 user 10m23.015s 00:36:17.891 sys 2m5.350s 00:36:17.891 10:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.891 10:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:17.891 ************************************ 00:36:17.891 END TEST nvmf_target_core_interrupt_mode 00:36:17.891 ************************************ 00:36:17.891 10:53:50 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:17.891 10:53:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:17.891 10:53:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.891 10:53:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.151 ************************************ 00:36:18.151 START TEST nvmf_interrupt 00:36:18.151 ************************************ 00:36:18.151 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:18.151 * Looking for test storage... 
00:36:18.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:18.151 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:18.151 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:18.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.152 --rc genhtml_branch_coverage=1 00:36:18.152 --rc genhtml_function_coverage=1 00:36:18.152 --rc genhtml_legend=1 00:36:18.152 --rc geninfo_all_blocks=1 00:36:18.152 --rc geninfo_unexecuted_blocks=1 00:36:18.152 00:36:18.152 ' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:18.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.152 --rc genhtml_branch_coverage=1 00:36:18.152 --rc genhtml_function_coverage=1 00:36:18.152 --rc genhtml_legend=1 00:36:18.152 --rc geninfo_all_blocks=1 00:36:18.152 --rc geninfo_unexecuted_blocks=1 00:36:18.152 00:36:18.152 ' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:18.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.152 --rc genhtml_branch_coverage=1 00:36:18.152 --rc genhtml_function_coverage=1 00:36:18.152 --rc genhtml_legend=1 00:36:18.152 --rc geninfo_all_blocks=1 00:36:18.152 --rc geninfo_unexecuted_blocks=1 00:36:18.152 00:36:18.152 ' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:18.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.152 --rc genhtml_branch_coverage=1 00:36:18.152 --rc genhtml_function_coverage=1 00:36:18.152 --rc genhtml_legend=1 00:36:18.152 --rc geninfo_all_blocks=1 00:36:18.152 --rc geninfo_unexecuted_blocks=1 00:36:18.152 00:36:18.152 ' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
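The long run of scripts/common.sh lines above is cmp_versions deciding whether the installed lcov predates 2.x: each version string is split on ".", "-", and ":" into an array, and components are compared left to right until one side differs, so "1.15 < 2" is settled by the first pair. A trimmed sketch of that helper (the real one also normalizes non-numeric components through its decimal() sanitizer, omitted here for brevity):

# Trimmed sketch of cmp_versions from scripts/common.sh.
# Succeeds when "$1 $2 $3" holds, e.g. cmp_versions 1.15 "<" 2.
cmp_versions() {
    local -a ver1 ver2
    local v op=$2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # missing components compare as 0 (e.g. "2" vs "2.0")
        ((ver1[v] > ver2[v])) && { [[ $op == ">" ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == *"="* ]]   # all components equal: only <=, >=, == succeed
}

cmp_versions 1.15 "<" 2 && echo "lcov is older than 2.x"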
NVMF_THIRD_PORT=4422 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:36:18.152 10:53:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:18.153 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.413 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:18.413 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:18.413 10:53:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:18.413 10:53:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:24.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.995 10:53:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:24.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:24.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.995 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:25.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:25.256 10:53:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:25.256 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:25.257 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:25.257 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:25.257 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:25.257 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:25.257 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:25.257 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:25.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:25.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:36:25.518 00:36:25.518 --- 10.0.0.2 ping statistics --- 00:36:25.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.518 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:25.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:25.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:36:25.518 00:36:25.518 --- 10.0.0.1 ping statistics --- 00:36:25.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.518 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2339129 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2339129 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2339129 ']' 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:25.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.518 10:53:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:25.518 [2024-11-20 10:53:57.765303] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:25.518 [2024-11-20 10:53:57.766269] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:36:25.518 [2024-11-20 10:53:57.766308] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:25.518 [2024-11-20 10:53:57.859645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:25.779 [2024-11-20 10:53:57.895406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:25.780 [2024-11-20 10:53:57.895437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:25.780 [2024-11-20 10:53:57.895445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:25.780 [2024-11-20 10:53:57.895451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:25.780 [2024-11-20 10:53:57.895457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:25.780 [2024-11-20 10:53:57.896608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.780 [2024-11-20 10:53:57.896611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.780 [2024-11-20 10:53:57.951997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:25.780 [2024-11-20 10:53:57.952439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:25.780 [2024-11-20 10:53:57.952793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:26.351 5000+0 records in 00:36:26.351 5000+0 records out 00:36:26.351 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0190342 s, 538 MB/s 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 AIO0 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:26.351 [2024-11-20 10:53:58.657529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.351 10:53:58 
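setup_bdev_aio gives the interrupt test a file-backed namespace instead of a second malloc bdev: dd writes 5000 blocks of 2048 bytes (10,240,000 bytes, the "10 MB" in the dd summary above) and bdev_aio_create registers the file as bdev AIO0 with a 2048-byte block size. The equivalent commands, standalone:

# Equivalent of setup_bdev_aio: a 10 MB file-backed AIO bdev.
aiofile="$SPDK_DIR/test/nvmf/target/aiofile"
dd if=/dev/zero of="$aiofile" bs=2048 count=5000
"$SPDK_DIR/scripts/rpc.py" bdev_aio_create "$aiofile" AIO0 2048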
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:26.351 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:26.352 [2024-11-20 10:53:58.697976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2339129 0 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2339129 0 idle 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:26.352 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339129 root 20 0 128.2g 42624 32256 S 6.7 0.0 0:00.28 reactor_0' 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339129 root 20 0 128.2g 42624 32256 S 6.7 0.0 0:00.28 reactor_0 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2339129 1 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2339129 1 idle 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:26.613 10:53:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339135 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339135 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2339315 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2339129 0 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2339129 0 busy 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:26.874 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339129 root 20 0 128.2g 43776 32256 R 0.0 0.0 0:00.28 reactor_0' 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339129 root 20 0 128.2g 43776 32256 R 0.0 0.0 0:00.28 reactor_0 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:27.135 10:53:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:36:28.079 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:36:28.079 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:28.079 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:28.079 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:28.079 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339129 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.65 reactor_0' 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339129 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.65 reactor_0 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2339129 1 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2339129 1 busy 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339135 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:01.37 reactor_1' 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339135 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:01.37 reactor_1 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:28.341 10:54:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2339315 00:36:38.403 Initializing NVMe Controllers 00:36:38.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:38.403 Controller IO queue size 256, less than required. 00:36:38.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:38.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:38.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:38.403 Initialization complete. Launching workers. 
00:36:38.403 ======================================================== 00:36:38.403 Latency(us) 00:36:38.403 Device Information : IOPS MiB/s Average min max 00:36:38.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19652.49 76.77 13031.24 3371.10 30295.38 00:36:38.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18422.50 71.96 13898.39 7421.62 27925.27 00:36:38.403 ======================================================== 00:36:38.403 Total : 38075.00 148.73 13450.81 3371.10 30295.38 00:36:38.403 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2339129 0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2339129 0 idle 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339129 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.27 reactor_0' 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339129 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.27 reactor_0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2339129 1 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2339129 1 idle 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339135 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339135 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:38.403 10:54:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:38.404 10:54:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:38.404 10:54:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:38.404 10:54:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:38.404 10:54:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:38.404 10:54:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:38.404 10:54:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2339129 0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2339129 0 idle 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339129 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.66 reactor_0' 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339129 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.66 reactor_0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2339129 1 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2339129 1 idle 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2339129 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
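Every reactor_is_busy_or_idle call traced in this run boils down to the same probe: take one batch iteration of top for the target pid, pull the reactor thread's %CPU from column 9, truncate the fraction for integer math, and compare against a threshold, retrying up to ten times. A standalone sketch of that parsing; the idle threshold of 30 and the ten-attempt budget mirror the values visible above, everything else is an assumption.

  # Sketch: is reactor $2 of pid $1 idle? Column 9 of top's thread view is %CPU.
  reactor_cpu_rate() {
      top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | awk '{print $9}' | cut -d. -f1
  }
  reactor_is_idle() {
      local rate
      for _ in $(seq 1 10); do
          rate=$(reactor_cpu_rate "$1" "$2")
          rate=${rate:-100}              # treat a missing sample as busy
          (( rate <= 30 )) && return 0   # idle_threshold=30, as above
          sleep 1
      done
      return 1
  }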
00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2339129 -w 256 00:36:40.315 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2339135 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.16 reactor_1' 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2339135 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.16 reactor_1 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:40.575 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:40.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:40.836 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:40.836 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:40.836 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:40.836 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:40.836 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:40.836 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:40.837 10:54:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:40.837 rmmod nvme_tcp 00:36:40.837 rmmod nvme_fabrics 00:36:40.837 rmmod nvme_keyring 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2339129 ']' 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2339129 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2339129 ']' 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2339129 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2339129 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2339129' 00:36:40.837 killing process with pid 2339129 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2339129 00:36:40.837 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2339129 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:41.098 10:54:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.010 10:54:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:43.010 00:36:43.010 real 0m25.049s 00:36:43.010 user 0m40.453s 00:36:43.010 sys 0m9.288s 00:36:43.010 10:54:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:43.010 10:54:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:43.010 ************************************ 00:36:43.010 END TEST nvmf_interrupt 00:36:43.010 ************************************ 00:36:43.010 00:36:43.010 real 30m6.747s 00:36:43.010 user 61m31.296s 00:36:43.010 sys 10m21.961s 00:36:43.011 10:54:15 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:43.011 10:54:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:43.011 ************************************ 00:36:43.011 END TEST nvmf_tcp 00:36:43.011 ************************************ 00:36:43.273 10:54:15 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:43.273 10:54:15 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:43.273 10:54:15 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:43.273 10:54:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:43.273 10:54:15 -- common/autotest_common.sh@10 -- # set +x 00:36:43.273 ************************************ 00:36:43.273 START TEST spdkcli_nvmf_tcp 00:36:43.273 ************************************ 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:43.273 * Looking for test storage... 00:36:43.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:43.273 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.536 --rc genhtml_branch_coverage=1 00:36:43.536 --rc genhtml_function_coverage=1 00:36:43.536 --rc genhtml_legend=1 00:36:43.536 --rc geninfo_all_blocks=1 00:36:43.536 --rc geninfo_unexecuted_blocks=1 00:36:43.536 00:36:43.536 ' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.536 --rc genhtml_branch_coverage=1 00:36:43.536 --rc genhtml_function_coverage=1 00:36:43.536 --rc genhtml_legend=1 00:36:43.536 --rc geninfo_all_blocks=1 00:36:43.536 --rc geninfo_unexecuted_blocks=1 00:36:43.536 00:36:43.536 ' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.536 --rc genhtml_branch_coverage=1 00:36:43.536 --rc genhtml_function_coverage=1 00:36:43.536 --rc genhtml_legend=1 00:36:43.536 --rc geninfo_all_blocks=1 00:36:43.536 --rc geninfo_unexecuted_blocks=1 00:36:43.536 00:36:43.536 ' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.536 --rc genhtml_branch_coverage=1 00:36:43.536 --rc genhtml_function_coverage=1 00:36:43.536 --rc genhtml_legend=1 00:36:43.536 --rc geninfo_all_blocks=1 00:36:43.536 --rc geninfo_unexecuted_blocks=1 00:36:43.536 00:36:43.536 ' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:43.536 
10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:43.536 10:54:15 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:43.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:43.536 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2343260 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2343260 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2343260 ']' 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.537 10:54:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:43.537 [2024-11-20 10:54:15.763517] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
00:36:43.537 [2024-11-20 10:54:15.763583] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343260 ] 00:36:43.537 [2024-11-20 10:54:15.854747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:43.537 [2024-11-20 10:54:15.893501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.537 [2024-11-20 10:54:15.893504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:44.482 10:54:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:44.482 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:44.482 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:44.482 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:44.482 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:44.482 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:44.482 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:44.482 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:44.482 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:44.482 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:44.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:44.482 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:44.482 ' 00:36:47.028 [2024-11-20 10:54:19.288057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.413 [2024-11-20 10:54:20.648256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:50.956 [2024-11-20 10:54:23.183307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:53.605 [2024-11-20 10:54:25.409627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:54.986 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:54.986 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:54.986 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:54.986 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:54.987 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:54.987 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:54.987 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:54.987 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:54.987 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:54.987 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:54.987 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:54.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:54.987 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:54.987 10:54:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:55.557 
10:54:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:55.557 10:54:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:55.557 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:55.557 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:55.557 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:55.557 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:55.557 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:55.557 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:55.557 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:55.557 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:55.557 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:55.557 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:55.557 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:55.557 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:55.557 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:55.557 ' 00:37:02.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:02.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:02.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:02.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:02.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:02.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:02.142 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:02.142 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:02.142 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:02.142 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:02.142 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:02.142 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:02.142 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:02.142 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:02.142 
10:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2343260 ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343260' 00:37:02.142 killing process with pid 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2343260 ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2343260 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2343260 ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2343260 00:37:02.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2343260) - No such process 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2343260 is not found' 00:37:02.142 Process with pid 2343260 is not found 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:02.142 10:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:02.143 10:54:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:02.143 00:37:02.143 real 0m18.131s 00:37:02.143 user 0m40.274s 00:37:02.143 sys 0m0.878s 00:37:02.143 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.143 10:54:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:02.143 ************************************ 00:37:02.143 END TEST spdkcli_nvmf_tcp 00:37:02.143 ************************************ 00:37:02.143 10:54:33 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:02.143 10:54:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:02.143 10:54:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.143 10:54:33 -- common/autotest_common.sh@10 -- # set +x 00:37:02.143 ************************************ 00:37:02.143 START TEST nvmf_identify_passthru 00:37:02.143 ************************************ 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:02.143 * Looking for test 
storage... 00:37:02.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:02.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.143 --rc genhtml_branch_coverage=1 00:37:02.143 --rc genhtml_function_coverage=1 00:37:02.143 --rc genhtml_legend=1 00:37:02.143 --rc geninfo_all_blocks=1 00:37:02.143 --rc geninfo_unexecuted_blocks=1 00:37:02.143 00:37:02.143 ' 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:02.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.143 --rc genhtml_branch_coverage=1 00:37:02.143 --rc genhtml_function_coverage=1 00:37:02.143 --rc genhtml_legend=1 00:37:02.143 --rc geninfo_all_blocks=1 00:37:02.143 --rc geninfo_unexecuted_blocks=1 00:37:02.143 00:37:02.143 ' 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:02.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.143 --rc genhtml_branch_coverage=1 00:37:02.143 --rc genhtml_function_coverage=1 00:37:02.143 --rc genhtml_legend=1 00:37:02.143 --rc geninfo_all_blocks=1 00:37:02.143 --rc geninfo_unexecuted_blocks=1 00:37:02.143 00:37:02.143 ' 00:37:02.143 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:02.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.143 --rc genhtml_branch_coverage=1 00:37:02.143 --rc genhtml_function_coverage=1 00:37:02.143 --rc genhtml_legend=1 00:37:02.143 --rc geninfo_all_blocks=1 00:37:02.143 --rc geninfo_unexecuted_blocks=1 00:37:02.143 00:37:02.143 ' 00:37:02.143 10:54:33 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.143 10:54:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.143 10:54:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.143 10:54:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.143 10:54:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.143 10:54:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:02.143 10:54:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:02.143 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:02.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:02.144 10:54:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.144 10:54:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.144 10:54:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.144 10:54:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.144 10:54:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.144 10:54:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.144 10:54:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.144 10:54:33 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.144 10:54:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:02.144 10:54:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.144 10:54:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.144 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:02.144 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:02.144 10:54:33 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:02.144 10:54:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:08.727 10:54:40 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:08.727 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.727 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:08.727 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:08.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:08.728 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:08.728 10:54:40 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:08.728 10:54:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:08.728 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:08.728 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:08.728 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:08.728 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:08.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:37:08.989 00:37:08.989 --- 10.0.0.2 ping statistics --- 00:37:08.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.989 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:08.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:08.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:37:08.989 00:37:08.989 --- 10.0.0.1 ping statistics --- 00:37:08.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.989 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:08.989 10:54:41 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:08.989 10:54:41 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:08.989 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:09.561 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:37:09.561 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:09.561 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:09.561 10:54:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2350545 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:10.133 10:54:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2350545 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2350545 ']' 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:10.133 10:54:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.133 [2024-11-20 10:54:42.411807] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:37:10.133 [2024-11-20 10:54:42.411874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.394 [2024-11-20 10:54:42.510790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:10.394 [2024-11-20 10:54:42.564995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.394 [2024-11-20 10:54:42.565049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:10.394 [2024-11-20 10:54:42.565058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.394 [2024-11-20 10:54:42.565066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.394 [2024-11-20 10:54:42.565072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:10.394 [2024-11-20 10:54:42.567430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.394 [2024-11-20 10:54:42.567591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.394 [2024-11-20 10:54:42.567753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.394 [2024-11-20 10:54:42.567753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:10.965 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.965 INFO: Log level set to 20 00:37:10.965 INFO: Requests: 00:37:10.965 { 00:37:10.965 "jsonrpc": "2.0", 00:37:10.965 "method": "nvmf_set_config", 00:37:10.965 "id": 1, 00:37:10.965 "params": { 00:37:10.965 "admin_cmd_passthru": { 00:37:10.965 "identify_ctrlr": true 00:37:10.965 } 00:37:10.965 } 00:37:10.965 } 00:37:10.965 00:37:10.965 INFO: response: 00:37:10.965 { 00:37:10.965 "jsonrpc": "2.0", 00:37:10.965 "id": 1, 00:37:10.965 "result": true 00:37:10.965 } 00:37:10.965 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.965 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.965 INFO: Setting log level to 20 00:37:10.965 INFO: Setting log level to 20 00:37:10.965 INFO: Log level set to 20 00:37:10.965 INFO: Log level set to 20 00:37:10.965 INFO: Requests: 00:37:10.965 { 00:37:10.965 "jsonrpc": "2.0", 00:37:10.965 "method": "framework_start_init", 00:37:10.965 "id": 1 00:37:10.965 } 00:37:10.965 00:37:10.965 INFO: Requests: 00:37:10.965 { 00:37:10.965 "jsonrpc": "2.0", 00:37:10.965 "method": "framework_start_init", 00:37:10.965 "id": 1 00:37:10.965 } 00:37:10.965 00:37:10.965 [2024-11-20 10:54:43.300185] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:10.965 INFO: response: 00:37:10.965 { 00:37:10.965 "jsonrpc": "2.0", 00:37:10.965 "id": 1, 00:37:10.965 "result": true 00:37:10.965 } 00:37:10.965 00:37:10.965 INFO: response: 00:37:10.965 { 00:37:10.965 "jsonrpc": "2.0", 00:37:10.965 "id": 1, 00:37:10.965 "result": true 00:37:10.965 } 00:37:10.965 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.965 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.965 10:54:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:10.965 INFO: Setting log level to 40 00:37:10.965 INFO: Setting log level to 40 00:37:10.965 INFO: Setting log level to 40 00:37:10.965 [2024-11-20 10:54:43.313514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.965 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.965 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.227 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:11.227 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.227 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.489 Nvme0n1 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.489 [2024-11-20 10:54:43.704322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:11.489 [ 00:37:11.489 { 00:37:11.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:11.489 "subtype": "Discovery", 00:37:11.489 "listen_addresses": [], 00:37:11.489 "allow_any_host": true, 00:37:11.489 "hosts": [] 00:37:11.489 }, 00:37:11.489 { 00:37:11.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.489 "subtype": "NVMe", 00:37:11.489 "listen_addresses": [ 00:37:11.489 { 00:37:11.489 "trtype": "TCP", 00:37:11.489 "adrfam": "IPv4", 00:37:11.489 "traddr": "10.0.0.2", 00:37:11.489 "trsvcid": "4420" 00:37:11.489 } 00:37:11.489 ], 00:37:11.489 "allow_any_host": true, 00:37:11.489 "hosts": [], 00:37:11.489 "serial_number": 
"SPDK00000000000001", 00:37:11.489 "model_number": "SPDK bdev Controller", 00:37:11.489 "max_namespaces": 1, 00:37:11.489 "min_cntlid": 1, 00:37:11.489 "max_cntlid": 65519, 00:37:11.489 "namespaces": [ 00:37:11.489 { 00:37:11.489 "nsid": 1, 00:37:11.489 "bdev_name": "Nvme0n1", 00:37:11.489 "name": "Nvme0n1", 00:37:11.489 "nguid": "36344730526054870025384500000044", 00:37:11.489 "uuid": "36344730-5260-5487-0025-384500000044" 00:37:11.489 } 00:37:11.489 ] 00:37:11.489 } 00:37:11.489 ] 00:37:11.489 10:54:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:11.489 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:11.750 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:37:11.750 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:11.750 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:11.750 10:54:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:12.011 10:54:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:12.011 10:54:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.011 10:54:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:12.011 10:54:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:12.011 rmmod nvme_tcp 00:37:12.011 rmmod nvme_fabrics 00:37:12.011 rmmod nvme_keyring 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2350545 ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2350545 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2350545 ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2350545 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2350545 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2350545' 00:37:12.011 killing process with pid 2350545 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2350545 00:37:12.011 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2350545 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:12.272 10:54:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.272 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:12.272 10:54:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:14.813 10:54:46 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:14.813 00:37:14.813 real 0m13.028s 00:37:14.813 user 0m10.394s 00:37:14.813 sys 0m6.593s 00:37:14.813 10:54:46 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:14.813 10:54:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:14.813 ************************************ 00:37:14.813 END TEST nvmf_identify_passthru 00:37:14.813 ************************************ 00:37:14.813 10:54:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:14.813 10:54:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:14.813 10:54:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:14.813 10:54:46 -- common/autotest_common.sh@10 -- # set +x 00:37:14.813 ************************************ 00:37:14.813 START TEST nvmf_dif 00:37:14.813 ************************************ 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:14.813 * Looking for test storage... 
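Stripped of the xtrace noise, the identify-passthru test that just finished reduces to: start nvmf_tgt inside the target namespace with admin-identify passthrough enabled, export the local NVMe controller over TCP, identify it once over PCIe and once over the fabric, and require that the serial and model numbers agree. A hedged end-to-end sketch reassembled from the commands in this log (the final netns delete is an assumption about what _remove_spdk_ns does):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK_DIR/scripts/rpc.py
NS="ip netns exec cvl_0_0_ns_spdk"

# Start the target with RPC-driven init deferred (--wait-for-rpc), so the
# passthru flag can be set before the subsystem framework comes up.
$NS $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

$RPC nvmf_set_config --passthru-identify-ctrlr   # admin identify passthrough
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Identify the same controller twice and compare what passthru reports.
local_sn=$($SPDK_DIR/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 \
           | awk '/Serial Number:/ {print $3}')
remote_sn=$($SPDK_DIR/build/bin/spdk_nvme_identify \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
            | awk '/Serial Number:/ {print $3}')
[ "$local_sn" = "$remote_sn" ] || exit 1

# Teardown mirrors the log: kill the target, drop the SPDK iptables rule,
# and flush/remove the namespace plumbing.
kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk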
00:37:14.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:14.813 10:54:46 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:14.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.813 --rc genhtml_branch_coverage=1 00:37:14.813 --rc genhtml_function_coverage=1 00:37:14.813 --rc genhtml_legend=1 00:37:14.813 --rc geninfo_all_blocks=1 00:37:14.813 --rc geninfo_unexecuted_blocks=1 00:37:14.813 00:37:14.813 ' 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:14.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.813 --rc genhtml_branch_coverage=1 00:37:14.813 --rc genhtml_function_coverage=1 00:37:14.813 --rc genhtml_legend=1 00:37:14.813 --rc geninfo_all_blocks=1 00:37:14.813 --rc geninfo_unexecuted_blocks=1 00:37:14.813 00:37:14.813 ' 00:37:14.813 10:54:46 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:37:14.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.813 --rc genhtml_branch_coverage=1 00:37:14.813 --rc genhtml_function_coverage=1 00:37:14.813 --rc genhtml_legend=1 00:37:14.813 --rc geninfo_all_blocks=1 00:37:14.813 --rc geninfo_unexecuted_blocks=1 00:37:14.813 00:37:14.813 ' 00:37:14.814 10:54:46 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:14.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.814 --rc genhtml_branch_coverage=1 00:37:14.814 --rc genhtml_function_coverage=1 00:37:14.814 --rc genhtml_legend=1 00:37:14.814 --rc geninfo_all_blocks=1 00:37:14.814 --rc geninfo_unexecuted_blocks=1 00:37:14.814 00:37:14.814 ' 00:37:14.814 10:54:46 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:14.814 10:54:46 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:14.814 10:54:46 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:14.814 10:54:46 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:14.814 10:54:46 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:14.814 10:54:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.814 10:54:46 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.814 10:54:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.814 10:54:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:14.814 10:54:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:14.814 10:54:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:14.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:14.814 10:54:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:14.814 10:54:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:14.814 10:54:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:14.814 10:54:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:14.814 10:54:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.814 10:54:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:14.814 10:54:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:14.814 10:54:47 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:37:14.814 10:54:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:22.956 10:54:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:22.956 10:54:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:22.956 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:22.957 
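The discovery pass above keys on PCI vendor:device IDs; 0x8086:0x159b, matched for both ports of the adapter here, is the Intel E810 family this e810 run expects. A rough hand-run equivalent of that lookup, sketched on the assumption that lspci is available on the node, would be:

    # Match Intel E810 ports by vendor:device ID, as the pci_bus_cache lookup does
    lspci -nn -d 8086:159b
    # Resolve a matched PCI address to its kernel netdev name, as the
    # /sys/bus/pci/devices/$pci/net/ glob below does
    ls /sys/bus/pci/devices/0000:4b:00.0/net/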
10:54:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:22.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:22.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:22.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:22.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:22.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms
00:37:22.957
00:37:22.957 --- 10.0.0.2 ping statistics ---
00:37:22.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:22.957 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:22.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:22.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms
00:37:22.957
00:37:22.957 --- 10.0.0.1 ping statistics ---
00:37:22.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:22.957 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:37:22.957 10:54:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:37:25.504 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:37:25.504 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:37:25.504 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:37:25.765 10:54:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:25.765 10:54:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:25.765 10:54:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:25.765 10:54:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:25.765 10:54:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:25.765 10:54:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:25.765 10:54:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:37:25.765 10:54:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:37:25.765 10:54:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:37:25.765 10:54:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2356542
00:37:25.765 10:54:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2356542
00:37:25.765 10:54:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2356542 ']'
00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:37:25.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.765 10:54:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:25.765 [2024-11-20 10:54:58.087178] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:37:25.765 [2024-11-20 10:54:58.087245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.026 [2024-11-20 10:54:58.183815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.026 [2024-11-20 10:54:58.235099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:26.026 [2024-11-20 10:54:58.235146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:26.026 [2024-11-20 10:54:58.235155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:26.026 [2024-11-20 10:54:58.235172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:26.026 [2024-11-20 10:54:58.235178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:26.026 [2024-11-20 10:54:58.235926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:26.677 10:54:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 10:54:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:26.677 10:54:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:26.677 10:54:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 [2024-11-20 10:54:58.933774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.677 10:54:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.677 10:54:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 ************************************ 00:37:26.677 START TEST fio_dif_1_default 00:37:26.677 ************************************ 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 bdev_null0 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.677 10:54:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.677 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:26.677 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.677 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:26.677 [2024-11-20 10:54:59.018123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:26.678 { 00:37:26.678 "params": { 00:37:26.678 "name": "Nvme$subsystem", 00:37:26.678 "trtype": "$TEST_TRANSPORT", 00:37:26.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.678 "adrfam": "ipv4", 00:37:26.678 "trsvcid": "$NVMF_PORT", 00:37:26.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.678 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:26.678 "hdgst": ${hdgst:-false}, 00:37:26.678 "ddgst": ${ddgst:-false} 00:37:26.678 }, 00:37:26.678 "method": "bdev_nvme_attach_controller" 00:37:26.678 } 00:37:26.678 EOF 00:37:26.678 )") 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:26.678 10:54:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:26.678 "params": { 00:37:26.678 "name": "Nvme0", 00:37:26.678 "trtype": "tcp", 00:37:26.678 "traddr": "10.0.0.2", 00:37:26.678 "adrfam": "ipv4", 00:37:26.678 "trsvcid": "4420", 00:37:26.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.678 "hdgst": false, 00:37:26.678 "ddgst": false 00:37:26.678 }, 00:37:26.678 "method": "bdev_nvme_attach_controller" 00:37:26.678 }' 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:26.942 10:54:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.225 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:27.225 fio-3.35 00:37:27.225 Starting 1 thread 00:37:39.483 00:37:39.483 filename0: (groupid=0, jobs=1): err= 0: pid=2357074: Wed Nov 20 10:55:10 2024 00:37:39.483 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10020msec) 00:37:39.483 slat (nsec): min=5401, max=36697, avg=6260.03, stdev=1703.12 00:37:39.483 clat (usec): min=611, max=42948, avg=40877.03, stdev=2590.34 00:37:39.483 lat (usec): min=616, max=42957, avg=40883.29, stdev=2590.45 00:37:39.483 clat percentiles (usec): 00:37:39.483 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:39.483 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:39.483 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:39.483 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:39.483 | 99.99th=[42730] 00:37:39.483 bw ( KiB/s): min= 384, max= 416, per=99.69%, avg=390.40, stdev=13.13, samples=20 00:37:39.483 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:37:39.483 lat (usec) : 750=0.41% 00:37:39.483 lat (msec) : 50=99.59% 00:37:39.483 cpu : usr=93.47%, sys=6.30%, ctx=13, majf=0, minf=232 00:37:39.483 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.483 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.483 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:39.483 00:37:39.483 Run 
status group 0 (all jobs): 00:37:39.483 READ: bw=391KiB/s (401kB/s), 391KiB/s-391KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10020-10020msec 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:39.483 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 00:37:39.484 real 0m11.288s 00:37:39.484 user 0m24.273s 00:37:39.484 sys 0m0.988s 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 ************************************ 00:37:39.484 END TEST fio_dif_1_default 00:37:39.484 ************************************ 00:37:39.484 10:55:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:39.484 10:55:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:39.484 10:55:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 ************************************ 00:37:39.484 START TEST fio_dif_1_multi_subsystems 00:37:39.484 ************************************ 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 bdev_null0 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 [2024-11-20 10:55:10.390263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 bdev_null1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.484 { 00:37:39.484 "params": { 00:37:39.484 "name": "Nvme$subsystem", 00:37:39.484 "trtype": "$TEST_TRANSPORT", 00:37:39.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.484 "adrfam": "ipv4", 00:37:39.484 "trsvcid": "$NVMF_PORT", 00:37:39.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.484 "hdgst": ${hdgst:-false}, 00:37:39.484 "ddgst": ${ddgst:-false} 00:37:39.484 }, 00:37:39.484 "method": "bdev_nvme_attach_controller" 00:37:39.484 } 00:37:39.484 EOF 00:37:39.484 )") 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.484 10:55:10 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.484 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.484 { 00:37:39.484 "params": { 00:37:39.484 "name": "Nvme$subsystem", 00:37:39.484 "trtype": "$TEST_TRANSPORT", 00:37:39.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.485 "adrfam": "ipv4", 00:37:39.485 "trsvcid": "$NVMF_PORT", 00:37:39.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.485 "hdgst": ${hdgst:-false}, 00:37:39.485 "ddgst": ${ddgst:-false} 00:37:39.485 }, 00:37:39.485 "method": "bdev_nvme_attach_controller" 00:37:39.485 } 00:37:39.485 EOF 00:37:39.485 )") 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:39.485 "params": { 00:37:39.485 "name": "Nvme0", 00:37:39.485 "trtype": "tcp", 00:37:39.485 "traddr": "10.0.0.2", 00:37:39.485 "adrfam": "ipv4", 00:37:39.485 "trsvcid": "4420", 00:37:39.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.485 "hdgst": false, 00:37:39.485 "ddgst": false 00:37:39.485 }, 00:37:39.485 "method": "bdev_nvme_attach_controller" 00:37:39.485 },{ 00:37:39.485 "params": { 00:37:39.485 "name": "Nvme1", 00:37:39.485 "trtype": "tcp", 00:37:39.485 "traddr": "10.0.0.2", 00:37:39.485 "adrfam": "ipv4", 00:37:39.485 "trsvcid": "4420", 00:37:39.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:39.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:39.485 "hdgst": false, 00:37:39.485 "ddgst": false 00:37:39.485 }, 00:37:39.485 "method": "bdev_nvme_attach_controller" 00:37:39.485 }' 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:39.485 10:55:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.485 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:39.485 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:39.485 fio-3.35 00:37:39.485 Starting 2 threads 00:37:49.487 00:37:49.487 filename0: (groupid=0, jobs=1): err= 0: pid=2359564: Wed Nov 20 10:55:21 2024 00:37:49.487 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10028msec) 00:37:49.487 slat (nsec): min=5433, max=66665, avg=6878.09, stdev=2657.20 00:37:49.487 clat (usec): min=658, max=42297, avg=41586.92, stdev=2675.81 00:37:49.487 lat (usec): min=666, max=42307, avg=41593.80, stdev=2675.67 00:37:49.487 clat percentiles (usec): 00:37:49.487 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:49.487 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:49.487 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:49.487 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:49.487 | 99.99th=[42206] 00:37:49.487 bw ( KiB/s): min= 352, max= 416, per=33.57%, avg=384.00, stdev=10.38, samples=20 00:37:49.487 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:37:49.487 lat (usec) : 750=0.41% 00:37:49.487 lat (msec) : 50=99.59% 00:37:49.487 cpu : usr=97.44%, sys=2.33%, ctx=18, majf=0, minf=236 00:37:49.487 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:49.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.487 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.487 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:49.487 filename1: (groupid=0, jobs=1): err= 0: pid=2359566: Wed Nov 20 10:55:21 2024 00:37:49.487 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:37:49.487 slat (nsec): min=5428, max=31243, avg=6438.86, stdev=1383.74 00:37:49.487 clat (usec): min=586, max=42432, avg=20990.40, stdev=20176.63 00:37:49.487 lat (usec): min=592, max=42441, avg=20996.84, stdev=20176.45 00:37:49.487 clat percentiles (usec): 00:37:49.487 | 1.00th=[ 685], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 816], 00:37:49.487 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[ 1172], 60.00th=[41157], 00:37:49.487 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:49.487 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:49.487 | 99.99th=[42206] 00:37:49.487 bw ( KiB/s): min= 672, max= 768, per=66.35%, avg=759.58, stdev=25.78, samples=19 00:37:49.487 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:37:49.487 lat (usec) : 750=3.62%, 1000=46.11% 00:37:49.487 lat (msec) : 2=0.26%, 50=50.00% 00:37:49.487 cpu : usr=97.47%, sys=2.22%, ctx=26, majf=0, minf=92 00:37:49.487 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:49.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:49.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.487 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.487 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:49.487 00:37:49.487 Run status group 0 (all jobs): 00:37:49.487 READ: bw=1144KiB/s (1171kB/s), 385KiB/s-762KiB/s (394kB/s-780kB/s), io=11.2MiB (11.7MB), run=10001-10028msec 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.487 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.749 00:37:49.749 real 0m11.521s 00:37:49.749 user 0m35.786s 00:37:49.749 sys 0m0.790s 00:37:49.749 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.749 10:55:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 ************************************ 00:37:49.749 END TEST fio_dif_1_multi_subsystems 00:37:49.749 ************************************ 00:37:49.749 10:55:21 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:37:49.749 10:55:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:49.749 10:55:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.749 10:55:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 ************************************ 00:37:49.749 START TEST fio_dif_rand_params 00:37:49.749 ************************************ 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 bdev_null0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.749 [2024-11-20 10:55:21.992658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
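Condensed from the trace above, each subsystem bring-up in this suite is four RPCs; a minimal hand-run sketch (rpc.py path assumed relative to the spdk checkout, and the harness's rpc_cmd wrapper otherwise handles socket selection; values exactly as logged for this NULL_DIF=3 case) looks like:

    # Null backing bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # Subsystem, namespace, and TCP listener on the namespaced target address
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420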
00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:49.749 { 00:37:49.749 "params": { 00:37:49.749 "name": "Nvme$subsystem", 00:37:49.749 "trtype": "$TEST_TRANSPORT", 00:37:49.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:49.749 "adrfam": "ipv4", 00:37:49.749 "trsvcid": "$NVMF_PORT", 00:37:49.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:49.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:49.749 "hdgst": ${hdgst:-false}, 00:37:49.749 "ddgst": ${ddgst:-false} 00:37:49.749 }, 00:37:49.749 "method": "bdev_nvme_attach_controller" 00:37:49.749 } 00:37:49.749 EOF 00:37:49.749 )") 00:37:49.749 10:55:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:49.749 "params": { 00:37:49.749 "name": "Nvme0", 00:37:49.749 "trtype": "tcp", 00:37:49.749 "traddr": "10.0.0.2", 00:37:49.749 "adrfam": "ipv4", 00:37:49.749 "trsvcid": "4420", 00:37:49.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:49.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:49.749 "hdgst": false, 00:37:49.749 "ddgst": false 00:37:49.749 }, 00:37:49.749 "method": "bdev_nvme_attach_controller" 00:37:49.749 }' 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:49.749 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:49.750 10:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.348 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:50.348 ... 
00:37:50.348 fio-3.35 00:37:50.348 Starting 3 threads 00:37:56.928 00:37:56.928 filename0: (groupid=0, jobs=1): err= 0: pid=2361786: Wed Nov 20 10:55:28 2024 00:37:56.928 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(126MiB/5015msec) 00:37:56.928 slat (nsec): min=5613, max=34946, avg=8892.70, stdev=2776.04 00:37:56.928 clat (msec): min=3, max=130, avg=14.97, stdev=18.68 00:37:56.928 lat (msec): min=3, max=130, avg=14.98, stdev=18.68 00:37:56.928 clat percentiles (msec): 00:37:56.928 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:37:56.928 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:37:56.928 | 70.00th=[ 9], 80.00th=[ 11], 90.00th=[ 48], 95.00th=[ 51], 00:37:56.928 | 99.00th=[ 89], 99.50th=[ 90], 99.90th=[ 129], 99.95th=[ 131], 00:37:56.928 | 99.99th=[ 131] 00:37:56.928 bw ( KiB/s): min=18432, max=37120, per=24.89%, avg=25625.60, stdev=6334.76, samples=10 00:37:56.928 iops : min= 144, max= 290, avg=200.20, stdev=49.49, samples=10 00:37:56.928 lat (msec) : 4=0.30%, 10=78.39%, 20=4.98%, 50=11.75%, 100=4.38% 00:37:56.928 lat (msec) : 250=0.20% 00:37:56.928 cpu : usr=92.16%, sys=5.68%, ctx=373, majf=0, minf=87 00:37:56.928 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.928 issued rwts: total=1004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.928 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:56.928 filename0: (groupid=0, jobs=1): err= 0: pid=2361787: Wed Nov 20 10:55:28 2024 00:37:56.928 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(165MiB/5021msec) 00:37:56.928 slat (nsec): min=5403, max=96548, avg=8138.80, stdev=2919.51 00:37:56.928 clat (usec): min=3487, max=89707, avg=11432.98, stdev=14837.87 00:37:56.928 lat (usec): min=3496, max=89715, avg=11441.12, stdev=14837.93 00:37:56.928 clat percentiles (usec): 00:37:56.928 | 1.00th=[ 3654], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5080], 00:37:56.928 | 30.00th=[ 5538], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 6915], 00:37:56.928 | 70.00th=[ 7439], 80.00th=[ 7963], 90.00th=[46400], 95.00th=[47973], 00:37:56.928 | 99.00th=[51643], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:37:56.928 | 99.99th=[89654] 00:37:56.928 bw ( KiB/s): min=20480, max=48640, per=32.66%, avg=33620.80, stdev=9351.01, samples=10 00:37:56.928 iops : min= 160, max= 380, avg=262.60, stdev=73.01, samples=10 00:37:56.928 lat (msec) : 4=3.57%, 10=84.42%, 20=0.08%, 50=10.56%, 100=1.37% 00:37:56.928 cpu : usr=96.53%, sys=3.23%, ctx=9, majf=0, minf=155 00:37:56.928 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.928 issued rwts: total=1316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.928 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:56.928 filename0: (groupid=0, jobs=1): err= 0: pid=2361788: Wed Nov 20 10:55:28 2024 00:37:56.928 read: IOPS=343, BW=42.9MiB/s (45.0MB/s)(216MiB/5033msec) 00:37:56.928 slat (nsec): min=7902, max=75035, avg=8861.05, stdev=2186.60 00:37:56.928 clat (usec): min=3515, max=88342, avg=8725.42, stdev=9832.21 00:37:56.928 lat (usec): min=3524, max=88351, avg=8734.28, stdev=9832.36 00:37:56.928 clat percentiles (usec): 00:37:56.928 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 5014], 
00:37:56.928 | 30.00th=[ 5473], 40.00th=[ 5932], 50.00th=[ 6390], 60.00th=[ 7046], 00:37:56.928 | 70.00th=[ 7767], 80.00th=[ 8586], 90.00th=[ 9896], 95.00th=[12125], 00:37:56.928 | 99.00th=[48497], 99.50th=[50070], 99.90th=[87557], 99.95th=[88605], 00:37:56.928 | 99.99th=[88605] 00:37:56.928 bw ( KiB/s): min=34560, max=58112, per=42.89%, avg=44160.00, stdev=7998.83, samples=10 00:37:56.928 iops : min= 270, max= 454, avg=345.00, stdev=62.49, samples=10 00:37:56.928 lat (msec) : 4=1.50%, 10=89.00%, 20=4.69%, 50=4.40%, 100=0.41% 00:37:56.928 cpu : usr=93.44%, sys=6.28%, ctx=15, majf=0, minf=65 00:37:56.928 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:56.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:56.928 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:56.928 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:56.928 00:37:56.928 Run status group 0 (all jobs): 00:37:56.928 READ: bw=101MiB/s (105MB/s), 25.0MiB/s-42.9MiB/s (26.2MB/s-45.0MB/s), io=506MiB (531MB), run=5015-5033msec 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 bdev_null0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 [2024-11-20 10:55:28.322390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 bdev_null1 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:56.928 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.929 bdev_null2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:56.929 10:55:28 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:56.929 { 00:37:56.929 "params": { 00:37:56.929 "name": "Nvme$subsystem", 00:37:56.929 "trtype": "$TEST_TRANSPORT", 00:37:56.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:56.929 "adrfam": "ipv4", 00:37:56.929 "trsvcid": "$NVMF_PORT", 00:37:56.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:56.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:56.929 "hdgst": ${hdgst:-false}, 00:37:56.929 "ddgst": ${ddgst:-false} 00:37:56.929 }, 00:37:56.929 "method": "bdev_nvme_attach_controller" 00:37:56.929 } 00:37:56.929 EOF 00:37:56.929 )") 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:56.929 { 00:37:56.929 "params": { 00:37:56.929 "name": "Nvme$subsystem", 00:37:56.929 "trtype": "$TEST_TRANSPORT", 00:37:56.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:56.929 "adrfam": "ipv4", 00:37:56.929 "trsvcid": "$NVMF_PORT", 00:37:56.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:56.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:56.929 "hdgst": ${hdgst:-false}, 00:37:56.929 "ddgst": ${ddgst:-false} 00:37:56.929 }, 00:37:56.929 "method": "bdev_nvme_attach_controller" 00:37:56.929 } 00:37:56.929 EOF 00:37:56.929 )") 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:56.929 10:55:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:56.929 { 00:37:56.929 "params": { 00:37:56.929 "name": "Nvme$subsystem", 00:37:56.929 "trtype": "$TEST_TRANSPORT", 00:37:56.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:56.929 "adrfam": "ipv4", 00:37:56.929 "trsvcid": "$NVMF_PORT", 00:37:56.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:56.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:56.929 "hdgst": ${hdgst:-false}, 00:37:56.929 "ddgst": ${ddgst:-false} 00:37:56.929 }, 00:37:56.929 "method": "bdev_nvme_attach_controller" 00:37:56.929 } 00:37:56.929 EOF 00:37:56.929 )") 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:56.929 "params": { 00:37:56.929 "name": "Nvme0", 00:37:56.929 "trtype": "tcp", 00:37:56.929 "traddr": "10.0.0.2", 00:37:56.929 "adrfam": "ipv4", 00:37:56.929 "trsvcid": "4420", 00:37:56.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:56.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:56.929 "hdgst": false, 00:37:56.929 "ddgst": false 00:37:56.929 }, 00:37:56.929 "method": "bdev_nvme_attach_controller" 00:37:56.929 },{ 00:37:56.929 "params": { 00:37:56.929 "name": "Nvme1", 00:37:56.929 "trtype": "tcp", 00:37:56.929 "traddr": "10.0.0.2", 00:37:56.929 "adrfam": "ipv4", 00:37:56.929 "trsvcid": "4420", 00:37:56.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:56.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:56.929 "hdgst": false, 00:37:56.929 "ddgst": false 00:37:56.929 }, 00:37:56.929 "method": "bdev_nvme_attach_controller" 00:37:56.929 },{ 00:37:56.929 "params": { 00:37:56.929 "name": "Nvme2", 00:37:56.929 "trtype": "tcp", 00:37:56.929 "traddr": "10.0.0.2", 00:37:56.929 "adrfam": "ipv4", 00:37:56.929 "trsvcid": "4420", 00:37:56.929 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:56.929 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:56.929 "hdgst": false, 00:37:56.929 "ddgst": false 00:37:56.929 }, 00:37:56.929 "method": "bdev_nvme_attach_controller" 00:37:56.929 }' 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:56.929 
10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:56.929 10:55:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:56.929 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:56.929 ... 00:37:56.929 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:56.929 ... 00:37:56.929 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:56.929 ... 00:37:56.929 fio-3.35 00:37:56.929 Starting 24 threads 00:38:09.166 00:38:09.166 filename0: (groupid=0, jobs=1): err= 0: pid=2363298: Wed Nov 20 10:55:40 2024 00:38:09.166 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10009msec) 00:38:09.166 slat (usec): min=5, max=130, avg=25.43, stdev=19.96 00:38:09.166 clat (usec): min=10887, max=28130, avg=23693.02, stdev=1206.51 00:38:09.166 lat (usec): min=10907, max=28141, avg=23718.45, stdev=1203.97 00:38:09.166 clat percentiles (usec): 00:38:09.166 | 1.00th=[21103], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:38:09.166 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:38:09.166 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.166 | 99.00th=[24773], 99.50th=[26084], 99.90th=[28181], 99.95th=[28181], 00:38:09.166 | 99.99th=[28181] 00:38:09.166 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2680.32, stdev=51.71, samples=19 00:38:09.166 iops : min= 640, max= 704, avg=670.00, stdev=12.93, samples=19 00:38:09.166 lat (msec) : 20=0.95%, 50=99.05% 00:38:09.166 cpu : usr=98.97%, sys=0.76%, ctx=13, majf=0, minf=38 00:38:09.166 IO depths : 1=6.0%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:09.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.166 filename0: (groupid=0, jobs=1): err= 0: pid=2363299: Wed Nov 20 10:55:40 2024 00:38:09.166 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10011msec) 00:38:09.166 slat (usec): min=5, max=129, avg=33.65, stdev=21.73 00:38:09.166 clat (usec): min=6951, max=44095, avg=23569.07, stdev=2908.74 00:38:09.166 lat (usec): min=6959, max=44104, avg=23602.72, stdev=2909.42 00:38:09.166 clat percentiles (usec): 00:38:09.166 | 1.00th=[10683], 5.00th=[21627], 10.00th=[23200], 20.00th=[23200], 00:38:09.166 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.166 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.166 | 99.00th=[38011], 99.50th=[41157], 99.90th=[43254], 99.95th=[43779], 00:38:09.166 | 99.99th=[44303] 00:38:09.166 bw ( KiB/s): min= 2560, max= 2992, per=4.17%, avg=2689.58, stdev=83.41, samples=19 00:38:09.166 iops : min= 640, max= 748, avg=672.32, stdev=20.86, samples=19 00:38:09.166 lat (msec) : 10=0.86%, 20=3.60%, 50=95.53% 00:38:09.166 cpu : usr=98.82%, sys=0.89%, ctx=22, majf=0, minf=24 00:38:09.166 IO depths : 1=5.4%, 2=10.8%, 4=22.6%, 8=53.9%, 16=7.3%, 32=0.0%, 
>=64=0.0% 00:38:09.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 issued rwts: total=6716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.166 filename0: (groupid=0, jobs=1): err= 0: pid=2363300: Wed Nov 20 10:55:40 2024 00:38:09.166 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10011msec) 00:38:09.166 slat (nsec): min=5603, max=61482, avg=11170.92, stdev=7271.39 00:38:09.166 clat (usec): min=2519, max=25153, avg=23628.98, stdev=1898.53 00:38:09.166 lat (usec): min=2536, max=25185, avg=23640.15, stdev=1897.75 00:38:09.166 clat percentiles (usec): 00:38:09.166 | 1.00th=[ 9503], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:09.166 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:38:09.166 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.166 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:38:09.166 | 99.99th=[25035] 00:38:09.166 bw ( KiB/s): min= 2554, max= 3072, per=4.19%, avg=2700.53, stdev=95.02, samples=19 00:38:09.166 iops : min= 638, max= 768, avg=675.05, stdev=23.81, samples=19 00:38:09.166 lat (msec) : 4=0.13%, 10=0.92%, 20=0.61%, 50=98.34% 00:38:09.166 cpu : usr=98.91%, sys=0.83%, ctx=10, majf=0, minf=38 00:38:09.166 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:09.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.166 filename0: (groupid=0, jobs=1): err= 0: pid=2363301: Wed Nov 20 10:55:40 2024 00:38:09.166 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10006msec) 00:38:09.166 slat (nsec): min=5083, max=87747, avg=25494.80, stdev=13226.79 00:38:09.166 clat (usec): min=9696, max=43684, avg=23731.63, stdev=1176.51 00:38:09.166 lat (usec): min=9727, max=43698, avg=23757.12, stdev=1175.49 00:38:09.166 clat percentiles (usec): 00:38:09.166 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:38:09.166 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:38:09.166 | 70.00th=[23987], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.166 | 99.00th=[24511], 99.50th=[24511], 99.90th=[38011], 99.95th=[38011], 00:38:09.166 | 99.99th=[43779] 00:38:09.166 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2666.79, stdev=46.84, samples=19 00:38:09.166 iops : min= 640, max= 672, avg=666.58, stdev=11.71, samples=19 00:38:09.166 lat (msec) : 10=0.13%, 20=0.61%, 50=99.25% 00:38:09.166 cpu : usr=98.67%, sys=0.90%, ctx=59, majf=0, minf=23 00:38:09.166 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:09.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.166 filename0: (groupid=0, jobs=1): err= 0: pid=2363302: Wed Nov 20 10:55:40 2024 00:38:09.166 read: IOPS=673, BW=2696KiB/s (2761kB/s)(26.4MiB/10009msec) 00:38:09.166 slat (usec): min=5, max=101, avg=27.16, stdev=17.64 00:38:09.166 clat 
(usec): min=5136, max=41929, avg=23513.42, stdev=2550.87 00:38:09.166 lat (usec): min=5165, max=41943, avg=23540.58, stdev=2552.05 00:38:09.166 clat percentiles (usec): 00:38:09.166 | 1.00th=[13960], 5.00th=[20841], 10.00th=[23200], 20.00th=[23462], 00:38:09.166 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:38:09.166 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:09.166 | 99.00th=[32637], 99.50th=[34866], 99.90th=[40633], 99.95th=[41681], 00:38:09.166 | 99.99th=[41681] 00:38:09.166 bw ( KiB/s): min= 2528, max= 2800, per=4.17%, avg=2688.53, stdev=63.23, samples=19 00:38:09.166 iops : min= 632, max= 700, avg=672.11, stdev=15.81, samples=19 00:38:09.166 lat (msec) : 10=0.42%, 20=4.06%, 50=95.52% 00:38:09.166 cpu : usr=98.86%, sys=0.83%, ctx=73, majf=0, minf=23 00:38:09.166 IO depths : 1=4.7%, 2=9.3%, 4=19.4%, 8=57.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:38:09.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 complete : 0=0.0%, 4=92.7%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.166 issued rwts: total=6746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.166 filename0: (groupid=0, jobs=1): err= 0: pid=2363303: Wed Nov 20 10:55:40 2024 00:38:09.166 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10015msec) 00:38:09.166 slat (usec): min=5, max=102, avg=16.73, stdev=15.69 00:38:09.166 clat (usec): min=14452, max=29001, avg=23772.14, stdev=810.24 00:38:09.166 lat (usec): min=14481, max=29011, avg=23788.87, stdev=808.06 00:38:09.166 clat percentiles (usec): 00:38:09.166 | 1.00th=[20055], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:09.166 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.166 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.166 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28443], 99.95th=[28443], 00:38:09.166 | 99.99th=[28967] 00:38:09.166 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2673.58, stdev=41.14, samples=19 00:38:09.166 iops : min= 638, max= 672, avg=668.32, stdev=10.36, samples=19 00:38:09.166 lat (msec) : 20=0.84%, 50=99.16% 00:38:09.167 cpu : usr=98.57%, sys=0.98%, ctx=64, majf=0, minf=27 00:38:09.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename0: (groupid=0, jobs=1): err= 0: pid=2363304: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=721, BW=2886KiB/s (2956kB/s)(28.2MiB/10008msec) 00:38:09.167 slat (usec): min=5, max=122, avg= 7.91, stdev= 4.58 00:38:09.167 clat (usec): min=750, max=28335, avg=22111.38, stdev=5169.46 00:38:09.167 lat (usec): min=762, max=28341, avg=22119.29, stdev=5168.62 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[ 1385], 5.00th=[ 5932], 10.00th=[19268], 20.00th=[22938], 00:38:09.167 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.167 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:09.167 | 99.00th=[25822], 99.50th=[26870], 99.90th=[28181], 99.95th=[28181], 00:38:09.167 | 99.99th=[28443] 00:38:09.167 bw ( KiB/s): min= 2560, max= 4736, per=4.50%, avg=2898.68, stdev=475.75, 
samples=19 00:38:09.167 iops : min= 640, max= 1184, avg=724.63, stdev=118.94, samples=19 00:38:09.167 lat (usec) : 1000=0.06% 00:38:09.167 lat (msec) : 2=2.69%, 4=0.36%, 10=2.89%, 20=4.69%, 50=89.31% 00:38:09.167 cpu : usr=98.72%, sys=0.99%, ctx=14, majf=0, minf=56 00:38:09.167 IO depths : 1=2.8%, 2=7.4%, 4=19.6%, 8=60.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=92.8%, 8=1.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=7222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename0: (groupid=0, jobs=1): err= 0: pid=2363305: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10006msec) 00:38:09.167 slat (nsec): min=4837, max=98454, avg=32179.14, stdev=15287.97 00:38:09.167 clat (usec): min=10218, max=38688, avg=23594.60, stdev=1618.55 00:38:09.167 lat (usec): min=10265, max=38702, avg=23626.78, stdev=1619.85 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[15139], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.167 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.167 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.167 | 99.00th=[27657], 99.50th=[31589], 99.90th=[38536], 99.95th=[38536], 00:38:09.167 | 99.99th=[38536] 00:38:09.167 bw ( KiB/s): min= 2544, max= 2832, per=4.15%, avg=2674.11, stdev=66.26, samples=19 00:38:09.167 iops : min= 636, max= 708, avg=668.42, stdev=16.55, samples=19 00:38:09.167 lat (msec) : 20=2.07%, 50=97.93% 00:38:09.167 cpu : usr=99.06%, sys=0.64%, ctx=24, majf=0, minf=26 00:38:09.167 IO depths : 1=5.3%, 2=11.2%, 4=23.8%, 8=52.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename1: (groupid=0, jobs=1): err= 0: pid=2363306: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10005msec) 00:38:09.167 slat (usec): min=5, max=125, avg=34.10, stdev=18.88 00:38:09.167 clat (usec): min=3356, max=60144, avg=23637.63, stdev=2084.95 00:38:09.167 lat (usec): min=3364, max=60170, avg=23671.73, stdev=2084.92 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[17695], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.167 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.167 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.167 | 99.00th=[26608], 99.50th=[28443], 99.90th=[56361], 99.95th=[56361], 00:38:09.167 | 99.99th=[60031] 00:38:09.167 bw ( KiB/s): min= 2432, max= 2752, per=4.13%, avg=2664.05, stdev=71.34, samples=19 00:38:09.167 iops : min= 608, max= 688, avg=665.95, stdev=17.82, samples=19 00:38:09.167 lat (msec) : 4=0.09%, 10=0.07%, 20=0.94%, 50=98.65%, 100=0.24% 00:38:09.167 cpu : usr=98.96%, sys=0.73%, ctx=53, majf=0, minf=27 00:38:09.167 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6680,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename1: (groupid=0, jobs=1): err= 0: pid=2363307: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10006msec) 00:38:09.167 slat (usec): min=4, max=118, avg=39.08, stdev=20.87 00:38:09.167 clat (usec): min=9946, max=37999, avg=23550.67, stdev=1151.90 00:38:09.167 lat (usec): min=10009, max=38014, avg=23589.75, stdev=1152.80 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23200], 00:38:09.167 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:38:09.167 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:38:09.167 | 99.00th=[24249], 99.50th=[24511], 99.90th=[38011], 99.95th=[38011], 00:38:09.167 | 99.99th=[38011] 00:38:09.167 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2666.79, stdev=46.84, samples=19 00:38:09.167 iops : min= 640, max= 672, avg=666.58, stdev=11.71, samples=19 00:38:09.167 lat (msec) : 10=0.03%, 20=0.69%, 50=99.28% 00:38:09.167 cpu : usr=98.73%, sys=0.90%, ctx=43, majf=0, minf=35 00:38:09.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename1: (groupid=0, jobs=1): err= 0: pid=2363308: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=668, BW=2675KiB/s (2739kB/s)(26.2MiB/10012msec) 00:38:09.167 slat (nsec): min=5562, max=65259, avg=13224.13, stdev=8628.71 00:38:09.167 clat (usec): min=4768, max=30254, avg=23787.22, stdev=1035.83 00:38:09.167 lat (usec): min=4774, max=30273, avg=23800.45, stdev=1036.24 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:09.167 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.167 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.167 | 99.00th=[24773], 99.50th=[25035], 99.90th=[30278], 99.95th=[30278], 00:38:09.167 | 99.99th=[30278] 00:38:09.167 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2666.84, stdev=47.58, samples=19 00:38:09.167 iops : min= 640, max= 672, avg=666.63, stdev=11.87, samples=19 00:38:09.167 lat (msec) : 10=0.24%, 20=0.45%, 50=99.31% 00:38:09.167 cpu : usr=98.94%, sys=0.73%, ctx=70, majf=0, minf=48 00:38:09.167 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename1: (groupid=0, jobs=1): err= 0: pid=2363309: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=674, BW=2699KiB/s (2764kB/s)(26.4MiB/10009msec) 00:38:09.167 slat (usec): min=5, max=135, avg=18.37, stdev=19.78 00:38:09.167 clat (usec): min=4572, max=31661, avg=23558.56, stdev=1874.47 00:38:09.167 lat (usec): min=4578, max=31668, avg=23576.94, stdev=1872.92 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[13566], 5.00th=[22414], 10.00th=[23200], 20.00th=[23462], 
00:38:09.167 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.167 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:38:09.167 | 99.00th=[27132], 99.50th=[27919], 99.90th=[28443], 99.95th=[28967], 00:38:09.167 | 99.99th=[31589] 00:38:09.167 bw ( KiB/s): min= 2554, max= 2944, per=4.19%, avg=2701.37, stdev=90.99, samples=19 00:38:09.167 iops : min= 638, max= 736, avg=675.26, stdev=22.81, samples=19 00:38:09.167 lat (msec) : 10=0.65%, 20=1.81%, 50=97.54% 00:38:09.167 cpu : usr=99.03%, sys=0.66%, ctx=24, majf=0, minf=37 00:38:09.167 IO depths : 1=4.7%, 2=10.3%, 4=22.8%, 8=54.4%, 16=7.8%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename1: (groupid=0, jobs=1): err= 0: pid=2363310: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10005msec) 00:38:09.167 slat (usec): min=5, max=114, avg=36.34, stdev=19.50 00:38:09.167 clat (usec): min=14228, max=26234, avg=23611.96, stdev=666.59 00:38:09.167 lat (usec): min=14250, max=26251, avg=23648.30, stdev=666.32 00:38:09.167 clat percentiles (usec): 00:38:09.167 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.167 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.167 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.167 | 99.00th=[24511], 99.50th=[24773], 99.90th=[26084], 99.95th=[26084], 00:38:09.167 | 99.99th=[26346] 00:38:09.167 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.47, stdev=47.83, samples=19 00:38:09.167 iops : min= 640, max= 672, avg=666.84, stdev=11.95, samples=19 00:38:09.167 lat (msec) : 20=0.48%, 50=99.52% 00:38:09.167 cpu : usr=98.70%, sys=1.01%, ctx=27, majf=0, minf=29 00:38:09.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:09.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.167 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.167 filename1: (groupid=0, jobs=1): err= 0: pid=2363311: Wed Nov 20 10:55:40 2024 00:38:09.167 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10011msec) 00:38:09.167 slat (usec): min=5, max=137, avg=15.29, stdev=15.27 00:38:09.167 clat (usec): min=6407, max=25396, avg=23659.65, stdev=1607.29 00:38:09.167 lat (usec): min=6413, max=25403, avg=23674.93, stdev=1604.52 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[14615], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:38:09.168 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.168 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.168 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:38:09.168 | 99.99th=[25297] 00:38:09.168 bw ( KiB/s): min= 2554, max= 2944, per=4.18%, avg=2693.79, stdev=67.88, samples=19 00:38:09.168 iops : min= 638, max= 736, avg=673.37, stdev=17.04, samples=19 00:38:09.168 lat (msec) : 10=0.71%, 20=0.71%, 50=98.57% 00:38:09.168 cpu : usr=98.96%, sys=0.66%, ctx=49, majf=0, minf=41 00:38:09.168 IO depths : 1=6.2%, 2=12.4%, 
4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename1: (groupid=0, jobs=1): err= 0: pid=2363312: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:38:09.168 slat (usec): min=5, max=126, avg=20.77, stdev=22.14 00:38:09.168 clat (usec): min=9333, max=29819, avg=23665.86, stdev=1367.01 00:38:09.168 lat (usec): min=9365, max=29825, avg=23686.63, stdev=1363.43 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[15008], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.168 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.168 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.168 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28705], 99.95th=[29754], 00:38:09.168 | 99.99th=[29754] 00:38:09.168 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2687.05, stdev=73.94, samples=19 00:38:09.168 iops : min= 640, max= 736, avg=671.68, stdev=18.49, samples=19 00:38:09.168 lat (msec) : 10=0.31%, 20=1.15%, 50=98.54% 00:38:09.168 cpu : usr=98.51%, sys=1.04%, ctx=33, majf=0, minf=46 00:38:09.168 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename1: (groupid=0, jobs=1): err= 0: pid=2363313: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10006msec) 00:38:09.168 slat (usec): min=5, max=126, avg=34.92, stdev=20.13 00:38:09.168 clat (usec): min=9705, max=38535, avg=23588.47, stdev=1171.14 00:38:09.168 lat (usec): min=9712, max=38552, avg=23623.40, stdev=1171.47 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.168 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.168 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:38:09.168 | 99.00th=[24511], 99.50th=[24511], 99.90th=[38536], 99.95th=[38536], 00:38:09.168 | 99.99th=[38536] 00:38:09.168 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2666.79, stdev=46.84, samples=19 00:38:09.168 iops : min= 640, max= 672, avg=666.58, stdev=11.71, samples=19 00:38:09.168 lat (msec) : 10=0.16%, 20=0.55%, 50=99.28% 00:38:09.168 cpu : usr=99.07%, sys=0.63%, ctx=22, majf=0, minf=32 00:38:09.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename2: (groupid=0, jobs=1): err= 0: pid=2363314: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=660, BW=2642KiB/s (2705kB/s)(25.8MiB/10006msec) 00:38:09.168 slat (usec): min=5, max=153, avg=25.55, stdev=27.60 00:38:09.168 
clat (usec): min=5651, max=56961, avg=24074.59, stdev=4117.94 00:38:09.168 lat (usec): min=5659, max=56977, avg=24100.14, stdev=4118.19 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[14091], 5.00th=[17433], 10.00th=[23200], 20.00th=[23462], 00:38:09.168 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.168 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[32375], 00:38:09.168 | 99.00th=[39060], 99.50th=[43779], 99.90th=[56886], 99.95th=[56886], 00:38:09.168 | 99.99th=[56886] 00:38:09.168 bw ( KiB/s): min= 2436, max= 2688, per=4.08%, avg=2631.37, stdev=59.01, samples=19 00:38:09.168 iops : min= 609, max= 672, avg=657.74, stdev=14.74, samples=19 00:38:09.168 lat (msec) : 10=0.79%, 20=5.52%, 50=93.45%, 100=0.24% 00:38:09.168 cpu : usr=99.01%, sys=0.68%, ctx=28, majf=0, minf=43 00:38:09.168 IO depths : 1=1.1%, 2=2.1%, 4=6.6%, 8=75.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename2: (groupid=0, jobs=1): err= 0: pid=2363315: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10005msec) 00:38:09.168 slat (usec): min=5, max=121, avg=37.91, stdev=20.40 00:38:09.168 clat (usec): min=14282, max=26391, avg=23589.89, stdev=678.79 00:38:09.168 lat (usec): min=14293, max=26408, avg=23627.79, stdev=678.75 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.168 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.168 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.168 | 99.00th=[24511], 99.50th=[24511], 99.90th=[26346], 99.95th=[26346], 00:38:09.168 | 99.99th=[26346] 00:38:09.168 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2667.47, stdev=47.83, samples=19 00:38:09.168 iops : min= 640, max= 672, avg=666.84, stdev=11.95, samples=19 00:38:09.168 lat (msec) : 20=0.48%, 50=99.52% 00:38:09.168 cpu : usr=98.90%, sys=0.70%, ctx=113, majf=0, minf=35 00:38:09.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename2: (groupid=0, jobs=1): err= 0: pid=2363316: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10005msec) 00:38:09.168 slat (usec): min=5, max=108, avg=33.91, stdev=18.57 00:38:09.168 clat (usec): min=7968, max=43047, avg=23646.70, stdev=1605.91 00:38:09.168 lat (usec): min=7978, max=43071, avg=23680.61, stdev=1605.14 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.168 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.168 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.168 | 99.00th=[24511], 99.50th=[24773], 99.90th=[42730], 99.95th=[43254], 00:38:09.168 | 99.99th=[43254] 00:38:09.168 bw ( KiB/s): min= 2560, max= 2693, per=4.13%, 
avg=2663.21, stdev=49.43, samples=19 00:38:09.168 iops : min= 640, max= 673, avg=665.74, stdev=12.33, samples=19 00:38:09.168 lat (msec) : 10=0.27%, 20=0.60%, 50=99.13% 00:38:09.168 cpu : usr=98.54%, sys=1.06%, ctx=119, majf=0, minf=27 00:38:09.168 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename2: (groupid=0, jobs=1): err= 0: pid=2363317: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.2MiB/10004msec) 00:38:09.168 slat (nsec): min=5599, max=65704, avg=13490.20, stdev=8190.87 00:38:09.168 clat (usec): min=7980, max=26021, avg=23701.62, stdev=1408.84 00:38:09.168 lat (usec): min=7990, max=26035, avg=23715.11, stdev=1407.94 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[15008], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:38:09.168 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.168 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.168 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25560], 99.95th=[25822], 00:38:09.168 | 99.99th=[26084] 00:38:09.168 bw ( KiB/s): min= 2554, max= 2944, per=4.17%, avg=2687.05, stdev=74.51, samples=19 00:38:09.168 iops : min= 638, max= 736, avg=671.68, stdev=18.68, samples=19 00:38:09.168 lat (msec) : 10=0.48%, 20=0.95%, 50=98.57% 00:38:09.168 cpu : usr=98.97%, sys=0.72%, ctx=27, majf=0, minf=38 00:38:09.168 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:09.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.168 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.168 filename2: (groupid=0, jobs=1): err= 0: pid=2363318: Wed Nov 20 10:55:40 2024 00:38:09.168 read: IOPS=671, BW=2686KiB/s (2750kB/s)(26.2MiB/10009msec) 00:38:09.168 slat (usec): min=5, max=129, avg=24.76, stdev=22.62 00:38:09.168 clat (usec): min=9183, max=24897, avg=23626.67, stdev=1335.80 00:38:09.168 lat (usec): min=9200, max=24914, avg=23651.44, stdev=1332.73 00:38:09.168 clat percentiles (usec): 00:38:09.168 | 1.00th=[15008], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.168 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:09.168 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.168 | 99.00th=[24511], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:38:09.168 | 99.99th=[24773] 00:38:09.168 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2687.05, stdev=73.94, samples=19 00:38:09.168 iops : min= 640, max= 736, avg=671.68, stdev=18.49, samples=19 00:38:09.168 lat (msec) : 10=0.48%, 20=0.71%, 50=98.81% 00:38:09.168 cpu : usr=98.79%, sys=0.77%, ctx=45, majf=0, minf=36 00:38:09.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:09.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.169 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:38:09.169 filename2: (groupid=0, jobs=1): err= 0: pid=2363319: Wed Nov 20 10:55:40 2024 00:38:09.169 read: IOPS=667, BW=2672KiB/s (2736kB/s)(26.1MiB/10012msec) 00:38:09.169 slat (nsec): min=5621, max=90477, avg=32099.48, stdev=16519.77 00:38:09.169 clat (usec): min=15170, max=30298, avg=23656.13, stdev=740.77 00:38:09.169 lat (usec): min=15176, max=30316, avg=23688.22, stdev=741.09 00:38:09.169 clat percentiles (usec): 00:38:09.169 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.169 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.169 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.169 | 99.00th=[24773], 99.50th=[25297], 99.90th=[30278], 99.95th=[30278], 00:38:09.169 | 99.99th=[30278] 00:38:09.169 bw ( KiB/s): min= 2554, max= 2688, per=4.14%, avg=2666.84, stdev=48.47, samples=19 00:38:09.169 iops : min= 638, max= 672, avg=666.63, stdev=12.17, samples=19 00:38:09.169 lat (msec) : 20=0.72%, 50=99.28% 00:38:09.169 cpu : usr=98.91%, sys=0.78%, ctx=58, majf=0, minf=48 00:38:09.169 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:09.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.169 filename2: (groupid=0, jobs=1): err= 0: pid=2363320: Wed Nov 20 10:55:40 2024 00:38:09.169 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10005msec) 00:38:09.169 slat (usec): min=5, max=114, avg=36.42, stdev=18.92 00:38:09.169 clat (usec): min=8993, max=43184, avg=23632.21, stdev=1550.03 00:38:09.169 lat (usec): min=9003, max=43204, avg=23668.63, stdev=1549.83 00:38:09.169 clat percentiles (usec): 00:38:09.169 | 1.00th=[20841], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:38:09.169 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:38:09.169 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:38:09.169 | 99.00th=[24773], 99.50th=[27395], 99.90th=[43254], 99.95th=[43254], 00:38:09.169 | 99.99th=[43254] 00:38:09.169 bw ( KiB/s): min= 2475, max= 2693, per=4.13%, avg=2662.11, stdev=60.44, samples=19 00:38:09.169 iops : min= 618, max= 673, avg=665.42, stdev=15.21, samples=19 00:38:09.169 lat (msec) : 10=0.27%, 20=0.48%, 50=99.25% 00:38:09.169 cpu : usr=98.41%, sys=1.04%, ctx=181, majf=0, minf=32 00:38:09.169 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:09.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.169 filename2: (groupid=0, jobs=1): err= 0: pid=2363321: Wed Nov 20 10:55:40 2024 00:38:09.169 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10006msec) 00:38:09.169 slat (usec): min=5, max=106, avg=31.66, stdev=15.87 00:38:09.169 clat (usec): min=9844, max=38062, avg=23673.39, stdev=1374.04 00:38:09.169 lat (usec): min=9867, max=38080, avg=23705.05, stdev=1374.43 00:38:09.169 clat percentiles (usec): 00:38:09.169 | 1.00th=[18482], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:38:09.169 | 30.00th=[23462], 40.00th=[23725], 
50.00th=[23725], 60.00th=[23725], 00:38:09.169 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:38:09.169 | 99.00th=[28443], 99.50th=[29230], 99.90th=[38011], 99.95th=[38011], 00:38:09.169 | 99.99th=[38011] 00:38:09.169 bw ( KiB/s): min= 2549, max= 2688, per=4.14%, avg=2666.79, stdev=47.05, samples=19 00:38:09.169 iops : min= 637, max= 672, avg=666.58, stdev=11.76, samples=19 00:38:09.169 lat (msec) : 10=0.10%, 20=1.39%, 50=98.50% 00:38:09.169 cpu : usr=98.73%, sys=0.88%, ctx=82, majf=0, minf=28 00:38:09.169 IO depths : 1=3.7%, 2=9.3%, 4=23.3%, 8=54.4%, 16=9.3%, 32=0.0%, >=64=0.0% 00:38:09.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.169 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:09.169 00:38:09.169 Run status group 0 (all jobs): 00:38:09.169 READ: bw=62.9MiB/s (66.0MB/s), 2642KiB/s-2886KiB/s (2705kB/s-2956kB/s), io=630MiB (661MB), run=10004-10015msec 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
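For anyone replaying this teardown/setup cycle by hand, a minimal sketch of the equivalent standalone RPC sequence follows. It assumes a running SPDK nvmf target and that scripts/rpc.py stands in for the rpc_cmd wrapper used by target/dif.sh; the bdev name, NQN, serial number, DIF arguments, and 10.0.0.2:4420 listener all mirror the values echoed in the trace above.

    # Create a 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Expose it as an NVMe/TCP namespace on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Teardown, in the same order destroy_subsystem uses in the trace
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0

Note that the fio job reaches these namespaces through the spdk_bdev ioengine and the bdev_nvme_attach_controller JSON printed earlier in the trace, i.e. via SPDK's userspace NVMe/TCP initiator rather than the kernel one.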
00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 bdev_null0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.169 [2024-11-20 10:55:40.277997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:09.169 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.170 bdev_null1 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:09.170 { 00:38:09.170 "params": { 00:38:09.170 "name": "Nvme$subsystem", 00:38:09.170 "trtype": "$TEST_TRANSPORT", 00:38:09.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.170 "adrfam": "ipv4", 00:38:09.170 "trsvcid": "$NVMF_PORT", 00:38:09.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.170 "hdgst": ${hdgst:-false}, 00:38:09.170 "ddgst": ${ddgst:-false} 00:38:09.170 }, 00:38:09.170 "method": "bdev_nvme_attach_controller" 00:38:09.170 } 00:38:09.170 EOF 00:38:09.170 )") 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:09.170 { 00:38:09.170 "params": { 00:38:09.170 "name": "Nvme$subsystem", 00:38:09.170 "trtype": "$TEST_TRANSPORT", 00:38:09.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.170 "adrfam": "ipv4", 00:38:09.170 "trsvcid": "$NVMF_PORT", 00:38:09.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.170 "hdgst": ${hdgst:-false}, 00:38:09.170 "ddgst": ${ddgst:-false} 00:38:09.170 }, 00:38:09.170 "method": "bdev_nvme_attach_controller" 00:38:09.170 } 00:38:09.170 EOF 00:38:09.170 )") 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:09.170 "params": { 00:38:09.170 "name": "Nvme0", 00:38:09.170 "trtype": "tcp", 00:38:09.170 "traddr": "10.0.0.2", 00:38:09.170 "adrfam": "ipv4", 00:38:09.170 "trsvcid": "4420", 00:38:09.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:09.170 "hdgst": false, 00:38:09.170 "ddgst": false 00:38:09.170 }, 00:38:09.170 "method": "bdev_nvme_attach_controller" 00:38:09.170 },{ 00:38:09.170 "params": { 00:38:09.170 "name": "Nvme1", 00:38:09.170 "trtype": "tcp", 00:38:09.170 "traddr": "10.0.0.2", 00:38:09.170 "adrfam": "ipv4", 00:38:09.170 "trsvcid": "4420", 00:38:09.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.170 "hdgst": false, 00:38:09.170 "ddgst": false 00:38:09.170 }, 00:38:09.170 "method": "bdev_nvme_attach_controller" 00:38:09.170 }' 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:09.170 10:55:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:09.170 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:09.170 ... 00:38:09.170 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:09.170 ... 
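Before fio launches, gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem from a heredoc (nvmf/common.sh@582), joins them with IFS=',' (@585-586), and hands the result to fio over /dev/fd/62. A condensed sketch of that assembly, using the values the trace actually printed; the surrounding function body is a plausible reduction, not code quoted from common.sh:

# Condensed sketch of the heredoc-plus-join config assembly traced above.
config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # the comma-joined stanzas reach fio via --spdk_json_conf /dev/fd/62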
00:38:09.170 fio-3.35 00:38:09.170 Starting 4 threads 00:38:14.448 00:38:14.448 filename0: (groupid=0, jobs=1): err= 0: pid=2365531: Wed Nov 20 10:55:46 2024 00:38:14.448 read: IOPS=2937, BW=22.9MiB/s (24.1MB/s)(115MiB/5002msec) 00:38:14.448 slat (nsec): min=5397, max=49518, avg=8138.53, stdev=2625.94 00:38:14.448 clat (usec): min=1535, max=6151, avg=2701.96, stdev=215.97 00:38:14.448 lat (usec): min=1540, max=6182, avg=2710.10, stdev=216.04 00:38:14.448 clat percentiles (usec): 00:38:14.448 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2606], 00:38:14.448 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:38:14.448 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2900], 95.00th=[ 2933], 00:38:14.448 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 4359], 99.95th=[ 4817], 00:38:14.448 | 99.99th=[ 6063] 00:38:14.448 bw ( KiB/s): min=23232, max=23696, per=24.76%, avg=23488.00, stdev=145.33, samples=9 00:38:14.448 iops : min= 2904, max= 2962, avg=2936.00, stdev=18.17, samples=9 00:38:14.448 lat (msec) : 2=0.44%, 4=99.22%, 10=0.34% 00:38:14.448 cpu : usr=96.16%, sys=3.58%, ctx=6, majf=0, minf=59 00:38:14.448 IO depths : 1=0.1%, 2=0.1%, 4=71.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 issued rwts: total=14692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:14.448 filename0: (groupid=0, jobs=1): err= 0: pid=2365532: Wed Nov 20 10:55:46 2024 00:38:14.448 read: IOPS=2924, BW=22.8MiB/s (24.0MB/s)(114MiB/5001msec) 00:38:14.448 slat (nsec): min=7875, max=50851, avg=8644.35, stdev=2203.26 00:38:14.448 clat (usec): min=1613, max=4910, avg=2711.34, stdev=200.21 00:38:14.448 lat (usec): min=1621, max=4919, avg=2719.98, stdev=200.23 00:38:14.448 clat percentiles (usec): 00:38:14.448 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:38:14.448 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:38:14.448 | 70.00th=[ 2704], 80.00th=[ 2835], 90.00th=[ 2933], 95.00th=[ 2966], 00:38:14.448 | 99.00th=[ 3490], 99.50th=[ 3818], 99.90th=[ 4293], 99.95th=[ 4359], 00:38:14.448 | 99.99th=[ 4883] 00:38:14.448 bw ( KiB/s): min=23150, max=23552, per=24.65%, avg=23386.44, stdev=138.06, samples=9 00:38:14.448 iops : min= 2893, max= 2944, avg=2923.22, stdev=17.42, samples=9 00:38:14.448 lat (msec) : 2=0.15%, 4=99.49%, 10=0.36% 00:38:14.448 cpu : usr=96.84%, sys=2.88%, ctx=14, majf=0, minf=48 00:38:14.448 IO depths : 1=0.1%, 2=0.1%, 4=73.5%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 issued rwts: total=14627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:14.448 filename1: (groupid=0, jobs=1): err= 0: pid=2365534: Wed Nov 20 10:55:46 2024 00:38:14.448 read: IOPS=2959, BW=23.1MiB/s (24.2MB/s)(116MiB/5003msec) 00:38:14.448 slat (nsec): min=5400, max=35074, avg=8578.96, stdev=2702.01 00:38:14.448 clat (usec): min=996, max=4431, avg=2680.67, stdev=242.40 00:38:14.448 lat (usec): min=1014, max=4439, avg=2689.25, stdev=242.06 00:38:14.448 clat percentiles (usec): 00:38:14.448 | 1.00th=[ 1909], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2606], 00:38:14.448 | 30.00th=[ 2638], 
40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:38:14.448 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2933], 00:38:14.448 | 99.00th=[ 3589], 99.50th=[ 3818], 99.90th=[ 4146], 99.95th=[ 4228], 00:38:14.448 | 99.99th=[ 4424] 00:38:14.448 bw ( KiB/s): min=23536, max=24160, per=25.04%, avg=23754.67, stdev=206.46, samples=9 00:38:14.448 iops : min= 2942, max= 3020, avg=2969.33, stdev=25.81, samples=9 00:38:14.448 lat (usec) : 1000=0.01% 00:38:14.448 lat (msec) : 2=1.19%, 4=98.57%, 10=0.24% 00:38:14.448 cpu : usr=96.28%, sys=3.44%, ctx=7, majf=0, minf=21 00:38:14.448 IO depths : 1=0.1%, 2=0.1%, 4=70.1%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 issued rwts: total=14807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:14.448 filename1: (groupid=0, jobs=1): err= 0: pid=2365535: Wed Nov 20 10:55:46 2024 00:38:14.448 read: IOPS=3040, BW=23.8MiB/s (24.9MB/s)(119MiB/5001msec) 00:38:14.448 slat (nsec): min=5408, max=37214, avg=7966.55, stdev=1999.87 00:38:14.448 clat (usec): min=1055, max=4602, avg=2611.97, stdev=331.13 00:38:14.448 lat (usec): min=1061, max=4610, avg=2619.94, stdev=331.25 00:38:14.448 clat percentiles (usec): 00:38:14.448 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2376], 00:38:14.448 | 30.00th=[ 2474], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:14.448 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2900], 95.00th=[ 3359], 00:38:14.448 | 99.00th=[ 3687], 99.50th=[ 3851], 99.90th=[ 4080], 99.95th=[ 4228], 00:38:14.448 | 99.99th=[ 4621] 00:38:14.448 bw ( KiB/s): min=23904, max=24688, per=25.57%, avg=24257.78, stdev=311.74, samples=9 00:38:14.448 iops : min= 2988, max= 3086, avg=3032.22, stdev=38.97, samples=9 00:38:14.448 lat (msec) : 2=2.44%, 4=97.38%, 10=0.18% 00:38:14.448 cpu : usr=96.86%, sys=2.74%, ctx=129, majf=0, minf=50 00:38:14.448 IO depths : 1=0.1%, 2=0.2%, 4=68.5%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.448 issued rwts: total=15205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.448 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:14.448 00:38:14.448 Run status group 0 (all jobs): 00:38:14.448 READ: bw=92.6MiB/s (97.1MB/s), 22.8MiB/s-23.8MiB/s (24.0MB/s-24.9MB/s), io=464MiB (486MB), run=5001-5003msec 00:38:14.448 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:14.448 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:14.448 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.449 
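Cross-checking the group summary above: the four jobs average 22.9 + 22.8 + 23.1 + 23.8 = 92.6 MiB/s, exactly the aggregate READ bandwidth reported, and the per-job per= shares (24.76% + 24.65% + 25.04% + 25.57% ≈ 100%) show the load split nearly evenly across the four threads.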
10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.449 00:38:14.449 real 0m24.820s 00:38:14.449 user 5m18.839s 00:38:14.449 sys 0m4.572s 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.449 10:55:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:14.449 ************************************ 00:38:14.449 END TEST fio_dif_rand_params 00:38:14.449 ************************************ 00:38:14.449 10:55:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:14.449 10:55:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.449 10:55:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.449 10:55:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:14.709 ************************************ 00:38:14.709 START TEST fio_dif_digest 00:38:14.709 ************************************ 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:14.709 10:55:46 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.709 bdev_null0 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:14.709 [2024-11-20 10:55:46.895117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.709 { 00:38:14.709 "params": { 00:38:14.709 "name": "Nvme$subsystem", 00:38:14.709 "trtype": "$TEST_TRANSPORT", 00:38:14.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.709 "adrfam": "ipv4", 00:38:14.709 "trsvcid": "$NVMF_PORT", 00:38:14.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:38:14.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.709 "hdgst": ${hdgst:-false}, 00:38:14.709 "ddgst": ${ddgst:-false} 00:38:14.709 }, 00:38:14.709 "method": "bdev_nvme_attach_controller" 00:38:14.709 } 00:38:14.709 EOF 00:38:14.709 )") 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:14.709 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.710 "params": { 00:38:14.710 "name": "Nvme0", 00:38:14.710 "trtype": "tcp", 00:38:14.710 "traddr": "10.0.0.2", 00:38:14.710 "adrfam": "ipv4", 00:38:14.710 "trsvcid": "4420", 00:38:14.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.710 "hdgst": true, 00:38:14.710 "ddgst": true 00:38:14.710 }, 00:38:14.710 "method": "bdev_nvme_attach_controller" 00:38:14.710 }' 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:14.710 10:55:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.968 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:14.968 ... 
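Relative to the earlier random-parameters run, the attach stanza printed above differs only in "hdgst": true and "ddgst": true, which make the initiator negotiate CRC32C header and data digests on every NVMe/TCP PDU for this test. A hypothetical spot-check of the generated config (the grep is illustrative, not part of the harness):

gen_nvmf_target_json 0 | grep -E '"(hdgst|ddgst)"'
# expected for this test: both flags print as true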
00:38:14.968 fio-3.35 00:38:14.968 Starting 3 threads 00:38:27.204 00:38:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=2367015: Wed Nov 20 10:55:57 2024 00:38:27.204 read: IOPS=396, BW=49.5MiB/s (51.9MB/s)(495MiB/10004msec) 00:38:27.204 slat (nsec): min=5762, max=34867, avg=7823.91, stdev=1451.34 00:38:27.204 clat (usec): min=4773, max=51202, avg=7564.92, stdev=1925.58 00:38:27.204 lat (usec): min=4782, max=51237, avg=7572.75, stdev=1925.98 00:38:27.204 clat percentiles (usec): 00:38:27.204 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 6063], 00:38:27.204 | 30.00th=[ 6521], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 7767], 00:38:27.204 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10290], 00:38:27.204 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12387], 99.95th=[51119], 00:38:27.204 | 99.99th=[51119] 00:38:27.204 bw ( KiB/s): min=41728, max=55808, per=52.51%, avg=50701.47, stdev=4136.22, samples=19 00:38:27.204 iops : min= 326, max= 436, avg=396.11, stdev=32.31, samples=19 00:38:27.204 lat (msec) : 10=92.78%, 20=7.14%, 100=0.08% 00:38:27.204 cpu : usr=93.44%, sys=6.05%, ctx=446, majf=0, minf=182 00:38:27.204 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.204 issued rwts: total=3962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.204 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=2367016: Wed Nov 20 10:55:57 2024 00:38:27.204 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(262MiB/10044msec) 00:38:27.204 slat (nsec): min=5780, max=42236, avg=6590.96, stdev=1437.15 00:38:27.204 clat (msec): min=5, max=129, avg=14.35, stdev=15.94 00:38:27.204 lat (msec): min=5, max=129, avg=14.36, stdev=15.94 00:38:27.204 clat percentiles (msec): 00:38:27.204 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:38:27.204 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:38:27.204 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 49], 95.00th=[ 50], 00:38:27.204 | 99.00th=[ 90], 99.50th=[ 90], 99.90th=[ 92], 99.95th=[ 92], 00:38:27.204 | 99.99th=[ 130] 00:38:27.204 bw ( KiB/s): min=13312, max=44800, per=27.75%, avg=26790.40, stdev=8980.68, samples=20 00:38:27.204 iops : min= 104, max= 350, avg=209.30, stdev=70.16, samples=20 00:38:27.204 lat (msec) : 10=71.41%, 20=15.56%, 50=8.83%, 100=4.15%, 250=0.05% 00:38:27.204 cpu : usr=95.38%, sys=4.40%, ctx=22, majf=0, minf=166 00:38:27.204 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.204 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.204 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=2367017: Wed Nov 20 10:55:57 2024 00:38:27.204 read: IOPS=151, BW=19.0MiB/s (19.9MB/s)(190MiB/10014msec) 00:38:27.204 slat (nsec): min=5964, max=34180, avg=8118.97, stdev=1790.57 00:38:27.204 clat (msec): min=6, max=130, avg=19.77, stdev=20.45 00:38:27.204 lat (msec): min=6, max=130, avg=19.77, stdev=20.45 00:38:27.204 clat percentiles (msec): 00:38:27.204 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:38:27.204 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 
00:38:27.204 | 70.00th=[ 11], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 52], 00:38:27.204 | 99.00th=[ 91], 99.50th=[ 91], 99.90th=[ 93], 99.95th=[ 131], 00:38:27.204 | 99.99th=[ 131] 00:38:27.204 bw ( KiB/s): min=13056, max=28416, per=20.10%, avg=19404.80, stdev=3898.92, samples=20 00:38:27.204 iops : min= 102, max= 222, avg=151.60, stdev=30.46, samples=20 00:38:27.204 lat (msec) : 10=51.22%, 20=25.67%, 50=10.99%, 100=12.05%, 250=0.07% 00:38:27.204 cpu : usr=95.90%, sys=3.88%, ctx=15, majf=0, minf=80 00:38:27.204 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.204 issued rwts: total=1519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.204 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:27.204 00:38:27.204 Run status group 0 (all jobs): 00:38:27.204 READ: bw=94.3MiB/s (98.9MB/s), 19.0MiB/s-49.5MiB/s (19.9MB/s-51.9MB/s), io=947MiB (993MB), run=10004-10044msec 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.204 10:55:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:27.204 10:55:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.204 00:38:27.204 real 0m11.162s 00:38:27.204 user 0m44.725s 00:38:27.204 sys 0m1.798s 00:38:27.204 10:55:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.204 10:55:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:27.204 ************************************ 00:38:27.204 END TEST fio_dif_digest 00:38:27.204 ************************************ 00:38:27.204 10:55:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:27.204 10:55:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.204 rmmod nvme_tcp 00:38:27.204 rmmod nvme_fabrics 00:38:27.204 rmmod nvme_keyring 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.204 10:55:58 nvmf_dif -- 
nvmf/common.sh@128 -- # set -e 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2356542 ']' 00:38:27.204 10:55:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2356542 00:38:27.204 10:55:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2356542 ']' 00:38:27.204 10:55:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2356542 00:38:27.204 10:55:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356542 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356542' 00:38:27.205 killing process with pid 2356542 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2356542 00:38:27.205 10:55:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2356542 00:38:27.205 10:55:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:27.205 10:55:58 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:29.759 Waiting for block devices as requested 00:38:29.759 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:29.759 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:29.759 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:29.759 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:29.759 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:29.759 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:30.020 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:30.020 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:30.020 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:30.281 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:30.281 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:30.542 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:30.542 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:30.542 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:30.804 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:30.804 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:30.804 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:31.065 10:56:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.065 10:56:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:31.065 10:56:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.615 10:56:05 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:33.615 00:38:33.615 real 1m18.694s 00:38:33.615 user 8m6.489s 00:38:33.615 sys 0m21.589s 
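The vfio-pci -> ioatdma (and -> nvme) lines are setup.sh reset handing each device back to its kernel driver now that the SPDK run is finished. One such rebind, expressed through the generic sysfs interface the script wraps (the BDF and target driver are taken from the log; the explicit steps are illustrative, not quoted from setup.sh):

bdf=0000:80:01.6
echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind          # detach from vfio-pci
echo ioatdma > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the desired kernel driver
echo "$bdf" > /sys/bus/pci/drivers_probe                    # let the kernel rebind it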
00:38:33.615 10:56:05 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.615 10:56:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:33.615 ************************************ 00:38:33.615 END TEST nvmf_dif 00:38:33.615 ************************************ 00:38:33.615 10:56:05 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:33.615 10:56:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:33.615 10:56:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:33.615 10:56:05 -- common/autotest_common.sh@10 -- # set +x 00:38:33.615 ************************************ 00:38:33.615 START TEST nvmf_abort_qd_sizes 00:38:33.615 ************************************ 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:33.615 * Looking for test storage... 00:38:33.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:33.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.615 --rc genhtml_branch_coverage=1 00:38:33.615 --rc genhtml_function_coverage=1 00:38:33.615 --rc genhtml_legend=1 00:38:33.615 --rc geninfo_all_blocks=1 00:38:33.615 --rc geninfo_unexecuted_blocks=1 00:38:33.615 00:38:33.615 ' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:33.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.615 --rc genhtml_branch_coverage=1 00:38:33.615 --rc genhtml_function_coverage=1 00:38:33.615 --rc genhtml_legend=1 00:38:33.615 --rc geninfo_all_blocks=1 00:38:33.615 --rc geninfo_unexecuted_blocks=1 00:38:33.615 00:38:33.615 ' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:33.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.615 --rc genhtml_branch_coverage=1 00:38:33.615 --rc genhtml_function_coverage=1 00:38:33.615 --rc genhtml_legend=1 00:38:33.615 --rc geninfo_all_blocks=1 00:38:33.615 --rc geninfo_unexecuted_blocks=1 00:38:33.615 00:38:33.615 ' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:33.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.615 --rc genhtml_branch_coverage=1 00:38:33.615 --rc genhtml_function_coverage=1 00:38:33.615 --rc genhtml_legend=1 00:38:33.615 --rc geninfo_all_blocks=1 00:38:33.615 --rc geninfo_unexecuted_blocks=1 00:38:33.615 00:38:33.615 ' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:33.615 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:33.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:33.616 10:56:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:41.762 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:41.762 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:41.762 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:41.762 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:41.762 10:56:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:41.762 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.763 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.763 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:41.763 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:41.763 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.763 10:56:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:41.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms 00:38:41.763 00:38:41.763 --- 10.0.0.2 ping statistics --- 00:38:41.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.763 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
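
The nvmf_tcp_init block traced above turns the two detected e810 ports into a self-contained loopback test bed: cvl_0_0 moves into a private network namespace as the target, while cvl_0_1 stays in the default namespace as the initiator. A minimal sketch of the equivalent commands, with interface names and 10.0.0.0/24 addressing taken verbatim from the trace:

  ip netns add cvl_0_0_ns_spdk                                       # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator, as traced next
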
00:38:41.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:38:41.763 00:38:41.763 --- 10.0.0.1 ping statistics --- 00:38:41.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.763 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:41.763 10:56:13 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:45.067 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:45.067 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2376443 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2376443 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2376443 ']' 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.067 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.068 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:45.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:45.068 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.068 10:56:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:45.329 [2024-11-20 10:56:17.458987] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:38:45.329 [2024-11-20 10:56:17.459057] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:45.329 [2024-11-20 10:56:17.560628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:45.329 [2024-11-20 10:56:17.614821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:45.329 [2024-11-20 10:56:17.614879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:45.329 [2024-11-20 10:56:17.614889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:45.330 [2024-11-20 10:56:17.614896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:45.330 [2024-11-20 10:56:17.614903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:45.330 [2024-11-20 10:56:17.616964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.330 [2024-11-20 10:56:17.617124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:45.330 [2024-11-20 10:56:17.617286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:45.330 [2024-11-20 10:56:17.617438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:46.275 
10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.275 10:56:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:46.275 ************************************ 00:38:46.275 START TEST spdk_target_abort 00:38:46.275 ************************************ 00:38:46.275 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:46.275 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:46.275 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:46.275 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.275 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.536 spdk_targetn1 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.536 [2024-11-20 10:56:18.705310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.536 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.536 [2024-11-20 10:56:18.758196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:46.537 10:56:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.798 [2024-11-20 10:56:19.060754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:672 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:46.798 [2024-11-20 10:56:19.060811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0055 p:1 m:0 dnr:0 00:38:46.798 [2024-11-20 10:56:19.077838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1208 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:46.798 [2024-11-20 10:56:19.077871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0098 p:1 m:0 dnr:0 00:38:46.798 [2024-11-20 10:56:19.133449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2864 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:46.798 [2024-11-20 10:56:19.133484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:46.798 [2024-11-20 10:56:19.148800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3352 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:46.798 [2024-11-20 10:56:19.148831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a4 p:0 m:0 dnr:0 00:38:47.059 [2024-11-20 10:56:19.172837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:4024 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:47.059 [2024-11-20 10:56:19.172869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:38:50.364 Initializing NVMe Controllers 00:38:50.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:50.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:50.364 Initialization complete. Launching workers. 
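
The qd-4 abort run launched above exercises the target stack assembled by the preceding rpc_cmd calls. As a hedged sketch, the same sequence driven by hand would look like the following (invoking scripts/rpc.py directly is an assumption; the test's rpc_cmd wrapper resolves the RPC socket of the app running inside the namespace):

  # export the local NVMe device at 0000:65:00.0 over NVMe/TCP
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # then issue aborts against in-flight I/O at each queue depth in qds=(4 24 64)
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
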
00:38:50.364 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11329, failed: 5 00:38:50.364 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2797, failed to submit 8537 00:38:50.364 success 711, unsuccessful 2086, failed 0 00:38:50.364 10:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:50.364 10:56:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:50.364 [2024-11-20 10:56:22.407436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.407477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:38:50.364 [2024-11-20 10:56:22.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:840 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.429894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:0076 p:1 m:0 dnr:0 00:38:50.364 [2024-11-20 10:56:22.460580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1704 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.460603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00d7 p:1 m:0 dnr:0 00:38:50.364 [2024-11-20 10:56:22.492346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2456 len:8 PRP1 0x200004e46000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.492368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:50.364 [2024-11-20 10:56:22.500243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:2632 len:8 PRP1 0x200004e56000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.500263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:50.364 [2024-11-20 10:56:22.508107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:2720 len:8 PRP1 0x200004e54000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.508129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:50.364 [2024-11-20 10:56:22.559339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3752 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:38:50.364 [2024-11-20 10:56:22.559361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00df p:0 m:0 dnr:0 00:38:51.749 [2024-11-20 10:56:23.778892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:31216 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:38:51.749 [2024-11-20 10:56:23.778918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:52.320 [2024-11-20 10:56:24.623857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 
lba:50552 len:8 PRP1 0x200004e42000 PRP2 0x0 00:38:52.320 [2024-11-20 10:56:24.623880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00b6 p:1 m:0 dnr:0 00:38:53.262 Initializing NVMe Controllers 00:38:53.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:53.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:53.262 Initialization complete. Launching workers. 00:38:53.262 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8501, failed: 9 00:38:53.262 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7285 00:38:53.262 success 360, unsuccessful 865, failed 0 00:38:53.262 10:56:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:53.262 10:56:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:53.523 [2024-11-20 10:56:25.808809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:178 nsid:1 lba:1896 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:53.523 [2024-11-20 10:56:25.808833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:178 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:53.523 [2024-11-20 10:56:25.824027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:3712 len:8 PRP1 0x200004b06000 PRP2 0x0 00:38:53.523 [2024-11-20 10:56:25.824044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:001b p:1 m:0 dnr:0 00:38:54.093 [2024-11-20 10:56:26.186481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:148 nsid:1 lba:45544 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:54.093 [2024-11-20 10:56:26.186502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:148 cdw0:0 sqhd:0085 p:1 m:0 dnr:0 00:38:54.380 [2024-11-20 10:56:26.671698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:178 nsid:1 lba:102168 len:8 PRP1 0x200004ad4000 PRP2 0x0 00:38:54.380 [2024-11-20 10:56:26.671719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:178 cdw0:0 sqhd:002d p:1 m:0 dnr:0 00:38:56.291 [2024-11-20 10:56:28.148463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:272592 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:56.291 [2024-11-20 10:56:28.148489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:56.551 [2024-11-20 10:56:28.694693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:336288 len:8 PRP1 0x200004b0e000 PRP2 0x0 00:38:56.551 [2024-11-20 10:56:28.694717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:0085 p:1 m:0 dnr:0 00:38:56.551 Initializing NVMe Controllers 00:38:56.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:56.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:56.551 Initialization complete. 
Launching workers. 00:38:56.551 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43566, failed: 6 00:38:56.551 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2743, failed to submit 40829 00:38:56.551 success 612, unsuccessful 2131, failed 0 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.551 10:56:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2376443 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2376443 ']' 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2376443 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376443 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376443' 00:38:58.464 killing process with pid 2376443 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2376443 00:38:58.464 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2376443 00:38:58.726 00:38:58.726 real 0m12.470s 00:38:58.726 user 0m50.749s 00:38:58.726 sys 0m2.103s 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.726 ************************************ 00:38:58.726 END TEST spdk_target_abort 00:38:58.726 ************************************ 00:38:58.726 10:56:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:58.726 10:56:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:58.726 10:56:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.726 10:56:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:58.726 ************************************ 00:38:58.726 START TEST 
kernel_target_abort 00:38:58.726 ************************************ 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:58.726 10:56:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:02.042 Waiting for block devices as requested 00:39:02.042 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:02.302 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:02.302 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:02.302 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:02.570 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:02.570 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:02.570 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:02.570 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:02.855 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:02.855 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:03.136 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:03.136 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:03.136 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:03.412 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:03.412 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:03.412 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:03.690 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:04.014 No valid GPT data, bailing 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:04.014 10:56:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:39:04.014 00:39:04.014 Discovery Log Number of Records 2, Generation counter 2 00:39:04.014 =====Discovery Log Entry 0====== 00:39:04.014 trtype: tcp 00:39:04.014 adrfam: ipv4 00:39:04.014 subtype: current discovery subsystem 00:39:04.014 treq: not specified, sq flow control disable supported 00:39:04.014 portid: 1 00:39:04.014 trsvcid: 4420 00:39:04.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:04.014 traddr: 10.0.0.1 00:39:04.014 eflags: none 00:39:04.014 sectype: none 00:39:04.014 =====Discovery Log Entry 1====== 00:39:04.014 trtype: tcp 00:39:04.014 adrfam: ipv4 00:39:04.014 subtype: nvme subsystem 00:39:04.014 treq: not specified, sq flow control disable supported 00:39:04.014 portid: 1 00:39:04.014 trsvcid: 4420 00:39:04.014 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:04.014 traddr: 10.0.0.1 00:39:04.014 eflags: none 00:39:04.014 sectype: none 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:04.014 10:56:36 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:04.014 10:56:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:07.315 Initializing NVMe Controllers 00:39:07.315 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:07.315 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:07.315 Initialization complete. Launching workers. 00:39:07.315 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68148, failed: 0 00:39:07.315 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68148, failed to submit 0 00:39:07.315 success 0, unsuccessful 68148, failed 0 00:39:07.315 10:56:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:07.315 10:56:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:10.615 Initializing NVMe Controllers 00:39:10.615 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:10.615 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:10.615 Initialization complete. Launching workers. 
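
For reference, the configure_kernel_target steps traced above map onto the standard nvmet configfs layout. A sketch with the write targets spelled out (the mkdir and ln -s paths are verbatim from the trace; which attribute each bare echo lands in is an assumption based on the stock nvmet attribute names):

  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host       # assumed target of 'echo 1'
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # yields the two-entry discovery log shown above

Note how the kernel-target runs differ from the earlier SPDK-target runs: every abort is submitted but none succeed (success 0, unsuccessful 68148 at qd 4), whereas the SPDK target completed a few hundred aborts per queue depth.
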
00:39:10.615 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117218, failed: 0 00:39:10.615 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29470, failed to submit 87748 00:39:10.615 success 0, unsuccessful 29470, failed 0 00:39:10.615 10:56:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:10.615 10:56:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:13.919 Initializing NVMe Controllers 00:39:13.919 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:13.919 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:13.919 Initialization complete. Launching workers. 00:39:13.919 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146037, failed: 0 00:39:13.919 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36550, failed to submit 109487 00:39:13.919 success 0, unsuccessful 36550, failed 0 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:13.919 10:56:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:17.224 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:17.224 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:39:17.224 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:19.136 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:19.136 00:39:19.136 real 0m20.432s 00:39:19.136 user 0m9.992s 00:39:19.136 sys 0m6.084s 00:39:19.136 10:56:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.136 10:56:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:19.136 ************************************ 00:39:19.137 END TEST kernel_target_abort 00:39:19.137 ************************************ 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:19.137 rmmod nvme_tcp 00:39:19.137 rmmod nvme_fabrics 00:39:19.137 rmmod nvme_keyring 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2376443 ']' 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2376443 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2376443 ']' 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2376443 00:39:19.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2376443) - No such process 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2376443 is not found' 00:39:19.137 Process with pid 2376443 is not found 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:19.137 10:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:22.440 Waiting for block devices as requested 00:39:22.702 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:22.702 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:22.702 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:22.702 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:22.963 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:22.963 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:22.963 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:22.963 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:23.223 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:23.223 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:23.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:23.484 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:23.484 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:23.744 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:23.744 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:23.744 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:24.005 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:24.266 10:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.181 10:56:58 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:26.181 00:39:26.181 real 0m52.995s 00:39:26.181 user 1m6.233s 00:39:26.181 sys 0m19.493s 00:39:26.181 10:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.181 10:56:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:26.181 ************************************ 00:39:26.181 END TEST nvmf_abort_qd_sizes 00:39:26.181 ************************************ 00:39:26.442 10:56:58 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:26.442 10:56:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.442 10:56:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.442 10:56:58 -- common/autotest_common.sh@10 -- # set +x 00:39:26.442 ************************************ 00:39:26.442 START TEST keyring_file 00:39:26.442 ************************************ 00:39:26.442 10:56:58 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:26.442 * Looking for test storage... 
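
The nvmftestfini teardown traced above restores the host in three moves: strip only the SPDK-tagged firewall rule, delete the target namespace, and flush the initiator address. Roughly the following, where the namespace deletion is an assumption, since _remove_spdk_ns runs with xtrace suppressed:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator address
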
00:39:26.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:26.442 10:56:58 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:26.442 10:56:58 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:26.442 10:56:58 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:26.442 10:56:58 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:26.442 10:56:58 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:26.703 10:56:58 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:26.703 10:56:58 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:26.703 10:56:58 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.703 --rc genhtml_branch_coverage=1 00:39:26.703 --rc genhtml_function_coverage=1 00:39:26.703 --rc genhtml_legend=1 00:39:26.703 --rc geninfo_all_blocks=1 00:39:26.703 --rc geninfo_unexecuted_blocks=1 00:39:26.703 00:39:26.703 ' 00:39:26.703 10:56:58 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.703 --rc genhtml_branch_coverage=1 00:39:26.703 --rc genhtml_function_coverage=1 00:39:26.703 --rc genhtml_legend=1 00:39:26.703 --rc geninfo_all_blocks=1 
00:39:26.703 --rc geninfo_unexecuted_blocks=1 00:39:26.703 00:39:26.703 ' 00:39:26.703 10:56:58 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:26.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.703 --rc genhtml_branch_coverage=1 00:39:26.703 --rc genhtml_function_coverage=1 00:39:26.703 --rc genhtml_legend=1 00:39:26.703 --rc geninfo_all_blocks=1 00:39:26.703 --rc geninfo_unexecuted_blocks=1 00:39:26.703 00:39:26.703 ' 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.704 --rc genhtml_branch_coverage=1 00:39:26.704 --rc genhtml_function_coverage=1 00:39:26.704 --rc genhtml_legend=1 00:39:26.704 --rc geninfo_all_blocks=1 00:39:26.704 --rc geninfo_unexecuted_blocks=1 00:39:26.704 00:39:26.704 ' 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:26.704 10:56:58 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:26.704 10:56:58 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.704 10:56:58 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.704 10:56:58 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.704 10:56:58 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.704 10:56:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.704 10:56:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.704 10:56:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:26.704 10:56:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:26.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
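[annotation] The prep_key calls that run next wrap each raw hex key in the NVMe/TCP TLS PSK interchange format via an inline `python -` heredoc before writing it to a mktemp file and chmod-ing it 0600. A minimal standalone sketch of that transformation, assuming the standard interchange layout (key bytes plus a little-endian CRC32, base64-encoded, framed as NVMeTLSkey-1:<digest>:<b64>:); the function name mirrors the shell helper but the exact byte layout is an assumption, not copied from the suite:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Sketch of the PSK interchange framing used by format_interchange_psk.

    Assumed layout: base64(key_bytes || CRC32(key_bytes) little-endian),
    prefixed with 'NVMeTLSkey-1' and the two-digit hash identifier
    (0 here means the configured PSK is retained as-is).
    """
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# The two keys prepared in the log below:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))  # key0
print(format_interchange_psk("112233445566778899aabbccddeeff00", 0))  # key1
```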
00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vUKQRP9AOs 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vUKQRP9AOs 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vUKQRP9AOs 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vUKQRP9AOs 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uVTrHd8J1t 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:26.704 10:56:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uVTrHd8J1t 00:39:26.704 10:56:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uVTrHd8J1t 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uVTrHd8J1t 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=2386925 00:39:26.704 10:56:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2386925 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2386925 ']' 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.704 10:56:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:26.704 [2024-11-20 10:56:59.028412] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:39:26.704 [2024-11-20 10:56:59.028484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2386925 ] 00:39:26.967 [2024-11-20 10:56:59.122437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.967 [2024-11-20 10:56:59.178184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.541 10:56:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.541 10:56:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:27.541 10:56:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:27.541 10:56:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.541 10:56:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:27.541 [2024-11-20 10:56:59.863983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:27.541 null0 00:39:27.541 [2024-11-20 10:56:59.896025] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:27.541 [2024-11-20 10:56:59.896423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.803 10:56:59 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:27.803 [2024-11-20 10:56:59.928097] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:27.803 request: 00:39:27.803 { 00:39:27.803 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:27.803 "secure_channel": false, 00:39:27.803 "listen_address": { 00:39:27.803 "trtype": "tcp", 00:39:27.803 "traddr": "127.0.0.1", 00:39:27.803 "trsvcid": "4420" 00:39:27.803 }, 00:39:27.803 "method": "nvmf_subsystem_add_listener", 00:39:27.803 "req_id": 1 00:39:27.803 } 00:39:27.803 Got JSON-RPC error response 00:39:27.803 response: 00:39:27.803 { 00:39:27.803 
"code": -32602, 00:39:27.803 "message": "Invalid parameters" 00:39:27.803 } 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:27.803 10:56:59 keyring_file -- keyring/file.sh@47 -- # bperfpid=2387001 00:39:27.803 10:56:59 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2387001 /var/tmp/bperf.sock 00:39:27.803 10:56:59 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2387001 ']' 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:27.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:27.803 10:56:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:27.803 [2024-11-20 10:56:59.987154] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:39:27.803 [2024-11-20 10:56:59.987209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387001 ] 00:39:27.803 [2024-11-20 10:57:00.073113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.803 [2024-11-20 10:57:00.110139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.746 10:57:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:28.746 10:57:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:28.746 10:57:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:28.746 10:57:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:28.746 10:57:00 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uVTrHd8J1t 00:39:28.746 10:57:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uVTrHd8J1t 00:39:29.006 10:57:01 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:29.006 10:57:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:29.006 10:57:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:29.006 10:57:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:29.006 10:57:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:39:29.006 10:57:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vUKQRP9AOs == \/\t\m\p\/\t\m\p\.\v\U\K\Q\R\P\9\A\O\s ]] 00:39:29.006 10:57:01 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:29.006 10:57:01 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:29.006 10:57:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:29.006 10:57:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:29.006 10:57:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:29.267 10:57:01 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uVTrHd8J1t == \/\t\m\p\/\t\m\p\.\u\V\T\r\H\d\8\J\1\t ]] 00:39:29.267 10:57:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:29.267 10:57:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:29.267 10:57:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:29.267 10:57:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:29.267 10:57:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:29.267 10:57:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:29.528 10:57:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:29.528 10:57:01 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:29.528 10:57:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:29.528 10:57:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:29.528 10:57:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:29.528 10:57:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:29.528 10:57:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:29.528 10:57:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:29.528 10:57:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:29.528 10:57:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:29.790 [2024-11-20 10:57:02.065856] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:29.790 nvme0n1 00:39:29.790 10:57:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:30.051 10:57:02 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:30.051 10:57:02 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:30.051 10:57:02 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:30.051 10:57:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:30.311 10:57:02 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:30.311 10:57:02 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:30.311 Running I/O for 1 seconds... 00:39:31.512 19148.00 IOPS, 74.80 MiB/s 00:39:31.512 Latency(us) 00:39:31.512 [2024-11-20T09:57:03.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.512 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:31.512 nvme0n1 : 1.00 19204.77 75.02 0.00 0.00 6653.21 2293.76 14199.47 00:39:31.512 [2024-11-20T09:57:03.888Z] =================================================================================================================== 00:39:31.512 [2024-11-20T09:57:03.888Z] Total : 19204.77 75.02 0.00 0.00 6653.21 2293.76 14199.47 00:39:31.512 { 00:39:31.512 "results": [ 00:39:31.512 { 00:39:31.512 "job": "nvme0n1", 00:39:31.512 "core_mask": "0x2", 00:39:31.512 "workload": "randrw", 00:39:31.512 "percentage": 50, 00:39:31.512 "status": "finished", 00:39:31.512 "queue_depth": 128, 00:39:31.512 "io_size": 4096, 00:39:31.512 "runtime": 1.003761, 00:39:31.512 "iops": 19204.770856807547, 00:39:31.512 "mibps": 75.01863615940448, 00:39:31.512 "io_failed": 0, 00:39:31.512 "io_timeout": 0, 00:39:31.512 "avg_latency_us": 6653.207288824333, 00:39:31.512 "min_latency_us": 2293.76, 00:39:31.512 "max_latency_us": 14199.466666666667 00:39:31.512 } 00:39:31.512 ], 00:39:31.512 "core_count": 1 00:39:31.512 } 00:39:31.512 10:57:03 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:31.512 10:57:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:31.512 10:57:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:31.513 10:57:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:31.513 10:57:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:31.513 10:57:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:31.513 10:57:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:31.513 10:57:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:31.774 10:57:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:31.774 10:57:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:31.774 10:57:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:31.774 10:57:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:31.774 10:57:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:31.774 10:57:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:31.774 10:57:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:32.035 10:57:04 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:32.035 10:57:04 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:32.035 10:57:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:32.035 10:57:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:32.036 10:57:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:32.036 [2024-11-20 10:57:04.339119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:32.036 [2024-11-20 10:57:04.339885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25a0c10 (107): Transport endpoint is not connected 00:39:32.036 [2024-11-20 10:57:04.340881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25a0c10 (9): Bad file descriptor 00:39:32.036 [2024-11-20 10:57:04.341883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:32.036 [2024-11-20 10:57:04.341890] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:32.036 [2024-11-20 10:57:04.341896] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:32.036 [2024-11-20 10:57:04.341903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
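[annotation] The transport errors above are the expected negative path: the initiator presents key1 while the listener was set up trusting key0, so the TLS handshake fails, the socket is torn down (errno 107 / bad file descriptor on the qpair), and the JSON-RPC dump that follows shows this surfaced to the caller as error -5 (Input/output error). A sketch of the same attach attempt driven directly through rpc.py, with flags copied from the bperf_cmd line above (the wrapper function and return-code check are illustrative):

```python
import subprocess

RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
BPERF_SOCK = "/var/tmp/bperf.sock"

def attach_with_psk(psk_name: str) -> bool:
    """Attempt an NVMe/TCP attach using the named keyring PSK; return
    True on success, False when the RPC fails (non-zero exit)."""
    cmd = [
        RPC_PY, "-s", BPERF_SOCK, "bdev_nvme_attach_controller",
        "-b", "nvme0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420",
        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode0",
        "-q", "nqn.2016-06.io.spdk:host0", "--psk", psk_name,
    ]
    return subprocess.run(cmd, capture_output=True).returncode == 0

# The NOT wrapper in the test asserts exactly this: the wrong PSK must fail.
assert not attach_with_psk("key1")
```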
00:39:32.036 request: 00:39:32.036 { 00:39:32.036 "name": "nvme0", 00:39:32.036 "trtype": "tcp", 00:39:32.036 "traddr": "127.0.0.1", 00:39:32.036 "adrfam": "ipv4", 00:39:32.036 "trsvcid": "4420", 00:39:32.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:32.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:32.036 "prchk_reftag": false, 00:39:32.036 "prchk_guard": false, 00:39:32.036 "hdgst": false, 00:39:32.036 "ddgst": false, 00:39:32.036 "psk": "key1", 00:39:32.036 "allow_unrecognized_csi": false, 00:39:32.036 "method": "bdev_nvme_attach_controller", 00:39:32.036 "req_id": 1 00:39:32.036 } 00:39:32.036 Got JSON-RPC error response 00:39:32.036 response: 00:39:32.036 { 00:39:32.036 "code": -5, 00:39:32.036 "message": "Input/output error" 00:39:32.036 } 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:32.036 10:57:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:32.036 10:57:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:32.036 10:57:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:32.036 10:57:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:32.036 10:57:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:32.036 10:57:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:32.036 10:57:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:32.297 10:57:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:32.297 10:57:04 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:32.297 10:57:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:32.297 10:57:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:32.297 10:57:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:32.297 10:57:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:32.297 10:57:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:32.557 10:57:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:32.557 10:57:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:32.557 10:57:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:32.557 10:57:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:32.557 10:57:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:32.817 10:57:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:32.817 10:57:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:32.817 10:57:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:33.077 10:57:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:33.077 10:57:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vUKQRP9AOs 00:39:33.077 10:57:05 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:33.077 10:57:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:33.077 10:57:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:33.077 10:57:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:33.077 10:57:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:33.077 10:57:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:33.077 10:57:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:33.078 10:57:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:33.078 10:57:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:33.338 [2024-11-20 10:57:05.452752] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vUKQRP9AOs': 0100660 00:39:33.338 [2024-11-20 10:57:05.452771] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:33.338 request: 00:39:33.338 { 00:39:33.338 "name": "key0", 00:39:33.338 "path": "/tmp/tmp.vUKQRP9AOs", 00:39:33.338 "method": "keyring_file_add_key", 00:39:33.338 "req_id": 1 00:39:33.338 } 00:39:33.338 Got JSON-RPC error response 00:39:33.338 response: 00:39:33.338 { 00:39:33.338 "code": -1, 00:39:33.338 "message": "Operation not permitted" 00:39:33.338 } 00:39:33.338 10:57:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:33.338 10:57:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:33.338 10:57:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:33.338 10:57:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:33.338 10:57:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vUKQRP9AOs 00:39:33.338 10:57:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:33.338 10:57:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vUKQRP9AOs 00:39:33.338 10:57:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vUKQRP9AOs 00:39:33.338 10:57:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:33.338 10:57:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:33.338 10:57:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:33.338 10:57:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:33.338 10:57:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:33.338 10:57:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:33.600 10:57:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:33.600 10:57:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:33.600 10:57:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:33.600 10:57:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:33.600 [2024-11-20 10:57:05.958043] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vUKQRP9AOs': No such file or directory 00:39:33.600 [2024-11-20 10:57:05.958056] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:33.600 [2024-11-20 10:57:05.958069] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:33.600 [2024-11-20 10:57:05.958074] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:33.600 [2024-11-20 10:57:05.958080] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:33.600 [2024-11-20 10:57:05.958085] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:33.600 request: 00:39:33.600 { 00:39:33.600 "name": "nvme0", 00:39:33.600 "trtype": "tcp", 00:39:33.600 "traddr": "127.0.0.1", 00:39:33.600 "adrfam": "ipv4", 00:39:33.600 "trsvcid": "4420", 00:39:33.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:33.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:33.600 "prchk_reftag": false, 00:39:33.600 "prchk_guard": false, 00:39:33.600 "hdgst": false, 00:39:33.600 "ddgst": false, 00:39:33.600 "psk": "key0", 00:39:33.600 "allow_unrecognized_csi": false, 00:39:33.600 "method": "bdev_nvme_attach_controller", 00:39:33.600 "req_id": 1 00:39:33.600 } 00:39:33.600 Got JSON-RPC error response 00:39:33.600 response: 00:39:33.600 { 00:39:33.600 "code": -19, 00:39:33.600 "message": "No such device" 00:39:33.600 } 00:39:33.862 10:57:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:33.862 10:57:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:33.862 10:57:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:33.862 10:57:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:33.862 10:57:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:33.862 10:57:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:33.862 10:57:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dPLIAzcxn2 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:33.862 10:57:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:33.862 10:57:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:33.862 10:57:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:33.862 10:57:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:33.862 10:57:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:33.862 10:57:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dPLIAzcxn2 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dPLIAzcxn2 00:39:33.862 10:57:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.dPLIAzcxn2 00:39:33.862 10:57:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dPLIAzcxn2 00:39:33.862 10:57:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dPLIAzcxn2 00:39:34.124 10:57:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:34.124 10:57:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:34.385 nvme0n1 00:39:34.385 10:57:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:34.385 10:57:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:34.386 10:57:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:34.386 10:57:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:34.386 10:57:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:34.386 10:57:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:34.647 10:57:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:34.647 10:57:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:34.647 10:57:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:34.647 10:57:06 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:34.647 10:57:06 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:34.647 10:57:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:34.647 10:57:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:34.647 10:57:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:34.908 10:57:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:34.908 10:57:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:34.908 10:57:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:34.908 10:57:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:34.908 10:57:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:34.908 10:57:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:34.908 10:57:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:35.169 10:57:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:35.169 10:57:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:35.169 10:57:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:35.169 10:57:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:35.169 10:57:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:35.169 10:57:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:35.430 10:57:07 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:35.430 10:57:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dPLIAzcxn2 00:39:35.430 10:57:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dPLIAzcxn2 00:39:35.692 10:57:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uVTrHd8J1t 00:39:35.692 10:57:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uVTrHd8J1t 00:39:35.692 10:57:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:35.692 10:57:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:35.952 nvme0n1 00:39:35.952 10:57:08 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:35.952 10:57:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:36.213 10:57:08 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:36.213 "subsystems": [ 00:39:36.213 { 00:39:36.213 "subsystem": "keyring", 00:39:36.213 "config": [ 00:39:36.213 { 00:39:36.213 "method": "keyring_file_add_key", 00:39:36.213 "params": { 00:39:36.213 "name": "key0", 00:39:36.213 "path": "/tmp/tmp.dPLIAzcxn2" 00:39:36.213 } 00:39:36.213 }, 00:39:36.213 { 00:39:36.213 "method": "keyring_file_add_key", 00:39:36.213 "params": { 00:39:36.213 "name": "key1", 00:39:36.213 "path": "/tmp/tmp.uVTrHd8J1t" 00:39:36.213 } 00:39:36.213 } 00:39:36.213 ] 00:39:36.213 
}, 00:39:36.213 { 00:39:36.213 "subsystem": "iobuf", 00:39:36.213 "config": [ 00:39:36.213 { 00:39:36.213 "method": "iobuf_set_options", 00:39:36.213 "params": { 00:39:36.213 "small_pool_count": 8192, 00:39:36.213 "large_pool_count": 1024, 00:39:36.213 "small_bufsize": 8192, 00:39:36.213 "large_bufsize": 135168, 00:39:36.213 "enable_numa": false 00:39:36.213 } 00:39:36.213 } 00:39:36.213 ] 00:39:36.213 }, 00:39:36.213 { 00:39:36.213 "subsystem": "sock", 00:39:36.213 "config": [ 00:39:36.213 { 00:39:36.213 "method": "sock_set_default_impl", 00:39:36.213 "params": { 00:39:36.213 "impl_name": "posix" 00:39:36.213 } 00:39:36.213 }, 00:39:36.213 { 00:39:36.213 "method": "sock_impl_set_options", 00:39:36.213 "params": { 00:39:36.213 "impl_name": "ssl", 00:39:36.213 "recv_buf_size": 4096, 00:39:36.213 "send_buf_size": 4096, 00:39:36.213 "enable_recv_pipe": true, 00:39:36.213 "enable_quickack": false, 00:39:36.213 "enable_placement_id": 0, 00:39:36.213 "enable_zerocopy_send_server": true, 00:39:36.213 "enable_zerocopy_send_client": false, 00:39:36.213 "zerocopy_threshold": 0, 00:39:36.213 "tls_version": 0, 00:39:36.213 "enable_ktls": false 00:39:36.213 } 00:39:36.213 }, 00:39:36.213 { 00:39:36.214 "method": "sock_impl_set_options", 00:39:36.214 "params": { 00:39:36.214 "impl_name": "posix", 00:39:36.214 "recv_buf_size": 2097152, 00:39:36.214 "send_buf_size": 2097152, 00:39:36.214 "enable_recv_pipe": true, 00:39:36.214 "enable_quickack": false, 00:39:36.214 "enable_placement_id": 0, 00:39:36.214 "enable_zerocopy_send_server": true, 00:39:36.214 "enable_zerocopy_send_client": false, 00:39:36.214 "zerocopy_threshold": 0, 00:39:36.214 "tls_version": 0, 00:39:36.214 "enable_ktls": false 00:39:36.214 } 00:39:36.214 } 00:39:36.214 ] 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "subsystem": "vmd", 00:39:36.214 "config": [] 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "subsystem": "accel", 00:39:36.214 "config": [ 00:39:36.214 { 00:39:36.214 "method": "accel_set_options", 00:39:36.214 "params": { 00:39:36.214 "small_cache_size": 128, 00:39:36.214 "large_cache_size": 16, 00:39:36.214 "task_count": 2048, 00:39:36.214 "sequence_count": 2048, 00:39:36.214 "buf_count": 2048 00:39:36.214 } 00:39:36.214 } 00:39:36.214 ] 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "subsystem": "bdev", 00:39:36.214 "config": [ 00:39:36.214 { 00:39:36.214 "method": "bdev_set_options", 00:39:36.214 "params": { 00:39:36.214 "bdev_io_pool_size": 65535, 00:39:36.214 "bdev_io_cache_size": 256, 00:39:36.214 "bdev_auto_examine": true, 00:39:36.214 "iobuf_small_cache_size": 128, 00:39:36.214 "iobuf_large_cache_size": 16 00:39:36.214 } 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "method": "bdev_raid_set_options", 00:39:36.214 "params": { 00:39:36.214 "process_window_size_kb": 1024, 00:39:36.214 "process_max_bandwidth_mb_sec": 0 00:39:36.214 } 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "method": "bdev_iscsi_set_options", 00:39:36.214 "params": { 00:39:36.214 "timeout_sec": 30 00:39:36.214 } 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "method": "bdev_nvme_set_options", 00:39:36.214 "params": { 00:39:36.214 "action_on_timeout": "none", 00:39:36.214 "timeout_us": 0, 00:39:36.214 "timeout_admin_us": 0, 00:39:36.214 "keep_alive_timeout_ms": 10000, 00:39:36.214 "arbitration_burst": 0, 00:39:36.214 "low_priority_weight": 0, 00:39:36.214 "medium_priority_weight": 0, 00:39:36.214 "high_priority_weight": 0, 00:39:36.214 "nvme_adminq_poll_period_us": 10000, 00:39:36.214 "nvme_ioq_poll_period_us": 0, 00:39:36.214 "io_queue_requests": 512, 00:39:36.214 
"delay_cmd_submit": true, 00:39:36.214 "transport_retry_count": 4, 00:39:36.214 "bdev_retry_count": 3, 00:39:36.214 "transport_ack_timeout": 0, 00:39:36.214 "ctrlr_loss_timeout_sec": 0, 00:39:36.214 "reconnect_delay_sec": 0, 00:39:36.214 "fast_io_fail_timeout_sec": 0, 00:39:36.214 "disable_auto_failback": false, 00:39:36.214 "generate_uuids": false, 00:39:36.214 "transport_tos": 0, 00:39:36.214 "nvme_error_stat": false, 00:39:36.214 "rdma_srq_size": 0, 00:39:36.214 "io_path_stat": false, 00:39:36.214 "allow_accel_sequence": false, 00:39:36.214 "rdma_max_cq_size": 0, 00:39:36.214 "rdma_cm_event_timeout_ms": 0, 00:39:36.214 "dhchap_digests": [ 00:39:36.214 "sha256", 00:39:36.214 "sha384", 00:39:36.214 "sha512" 00:39:36.214 ], 00:39:36.214 "dhchap_dhgroups": [ 00:39:36.214 "null", 00:39:36.214 "ffdhe2048", 00:39:36.214 "ffdhe3072", 00:39:36.214 "ffdhe4096", 00:39:36.214 "ffdhe6144", 00:39:36.214 "ffdhe8192" 00:39:36.214 ] 00:39:36.214 } 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "method": "bdev_nvme_attach_controller", 00:39:36.214 "params": { 00:39:36.214 "name": "nvme0", 00:39:36.214 "trtype": "TCP", 00:39:36.214 "adrfam": "IPv4", 00:39:36.214 "traddr": "127.0.0.1", 00:39:36.214 "trsvcid": "4420", 00:39:36.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:36.214 "prchk_reftag": false, 00:39:36.214 "prchk_guard": false, 00:39:36.214 "ctrlr_loss_timeout_sec": 0, 00:39:36.214 "reconnect_delay_sec": 0, 00:39:36.214 "fast_io_fail_timeout_sec": 0, 00:39:36.214 "psk": "key0", 00:39:36.214 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:36.214 "hdgst": false, 00:39:36.214 "ddgst": false, 00:39:36.214 "multipath": "multipath" 00:39:36.214 } 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "method": "bdev_nvme_set_hotplug", 00:39:36.214 "params": { 00:39:36.214 "period_us": 100000, 00:39:36.214 "enable": false 00:39:36.214 } 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "method": "bdev_wait_for_examine" 00:39:36.214 } 00:39:36.214 ] 00:39:36.214 }, 00:39:36.214 { 00:39:36.214 "subsystem": "nbd", 00:39:36.214 "config": [] 00:39:36.214 } 00:39:36.214 ] 00:39:36.214 }' 00:39:36.214 10:57:08 keyring_file -- keyring/file.sh@115 -- # killprocess 2387001 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2387001 ']' 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2387001 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2387001 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2387001' 00:39:36.214 killing process with pid 2387001 00:39:36.214 10:57:08 keyring_file -- common/autotest_common.sh@973 -- # kill 2387001 00:39:36.214 Received shutdown signal, test time was about 1.000000 seconds 00:39:36.214 00:39:36.214 Latency(us) 00:39:36.214 [2024-11-20T09:57:08.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:36.214 [2024-11-20T09:57:08.590Z] =================================================================================================================== 00:39:36.214 [2024-11-20T09:57:08.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:36.214 10:57:08 
keyring_file -- common/autotest_common.sh@978 -- # wait 2387001 00:39:36.475 10:57:08 keyring_file -- keyring/file.sh@118 -- # bperfpid=2388918 00:39:36.475 10:57:08 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2388918 /var/tmp/bperf.sock 00:39:36.475 10:57:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2388918 ']' 00:39:36.475 10:57:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:36.475 10:57:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:36.475 10:57:08 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:36.475 10:57:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:36.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:36.476 10:57:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:36.476 10:57:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:36.476 10:57:08 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:36.476 "subsystems": [ 00:39:36.476 { 00:39:36.476 "subsystem": "keyring", 00:39:36.476 "config": [ 00:39:36.476 { 00:39:36.476 "method": "keyring_file_add_key", 00:39:36.476 "params": { 00:39:36.476 "name": "key0", 00:39:36.476 "path": "/tmp/tmp.dPLIAzcxn2" 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "keyring_file_add_key", 00:39:36.476 "params": { 00:39:36.476 "name": "key1", 00:39:36.476 "path": "/tmp/tmp.uVTrHd8J1t" 00:39:36.476 } 00:39:36.476 } 00:39:36.476 ] 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "subsystem": "iobuf", 00:39:36.476 "config": [ 00:39:36.476 { 00:39:36.476 "method": "iobuf_set_options", 00:39:36.476 "params": { 00:39:36.476 "small_pool_count": 8192, 00:39:36.476 "large_pool_count": 1024, 00:39:36.476 "small_bufsize": 8192, 00:39:36.476 "large_bufsize": 135168, 00:39:36.476 "enable_numa": false 00:39:36.476 } 00:39:36.476 } 00:39:36.476 ] 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "subsystem": "sock", 00:39:36.476 "config": [ 00:39:36.476 { 00:39:36.476 "method": "sock_set_default_impl", 00:39:36.476 "params": { 00:39:36.476 "impl_name": "posix" 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "sock_impl_set_options", 00:39:36.476 "params": { 00:39:36.476 "impl_name": "ssl", 00:39:36.476 "recv_buf_size": 4096, 00:39:36.476 "send_buf_size": 4096, 00:39:36.476 "enable_recv_pipe": true, 00:39:36.476 "enable_quickack": false, 00:39:36.476 "enable_placement_id": 0, 00:39:36.476 "enable_zerocopy_send_server": true, 00:39:36.476 "enable_zerocopy_send_client": false, 00:39:36.476 "zerocopy_threshold": 0, 00:39:36.476 "tls_version": 0, 00:39:36.476 "enable_ktls": false 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "sock_impl_set_options", 00:39:36.476 "params": { 00:39:36.476 "impl_name": "posix", 00:39:36.476 "recv_buf_size": 2097152, 00:39:36.476 "send_buf_size": 2097152, 00:39:36.476 "enable_recv_pipe": true, 00:39:36.476 "enable_quickack": false, 00:39:36.476 "enable_placement_id": 0, 00:39:36.476 "enable_zerocopy_send_server": true, 00:39:36.476 "enable_zerocopy_send_client": false, 00:39:36.476 "zerocopy_threshold": 0, 00:39:36.476 "tls_version": 0, 00:39:36.476 "enable_ktls": false 00:39:36.476 } 00:39:36.476 } 00:39:36.476 ] 00:39:36.476 }, 
00:39:36.476 { 00:39:36.476 "subsystem": "vmd", 00:39:36.476 "config": [] 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "subsystem": "accel", 00:39:36.476 "config": [ 00:39:36.476 { 00:39:36.476 "method": "accel_set_options", 00:39:36.476 "params": { 00:39:36.476 "small_cache_size": 128, 00:39:36.476 "large_cache_size": 16, 00:39:36.476 "task_count": 2048, 00:39:36.476 "sequence_count": 2048, 00:39:36.476 "buf_count": 2048 00:39:36.476 } 00:39:36.476 } 00:39:36.476 ] 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "subsystem": "bdev", 00:39:36.476 "config": [ 00:39:36.476 { 00:39:36.476 "method": "bdev_set_options", 00:39:36.476 "params": { 00:39:36.476 "bdev_io_pool_size": 65535, 00:39:36.476 "bdev_io_cache_size": 256, 00:39:36.476 "bdev_auto_examine": true, 00:39:36.476 "iobuf_small_cache_size": 128, 00:39:36.476 "iobuf_large_cache_size": 16 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "bdev_raid_set_options", 00:39:36.476 "params": { 00:39:36.476 "process_window_size_kb": 1024, 00:39:36.476 "process_max_bandwidth_mb_sec": 0 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "bdev_iscsi_set_options", 00:39:36.476 "params": { 00:39:36.476 "timeout_sec": 30 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "bdev_nvme_set_options", 00:39:36.476 "params": { 00:39:36.476 "action_on_timeout": "none", 00:39:36.476 "timeout_us": 0, 00:39:36.476 "timeout_admin_us": 0, 00:39:36.476 "keep_alive_timeout_ms": 10000, 00:39:36.476 "arbitration_burst": 0, 00:39:36.476 "low_priority_weight": 0, 00:39:36.476 "medium_priority_weight": 0, 00:39:36.476 "high_priority_weight": 0, 00:39:36.476 "nvme_adminq_poll_period_us": 10000, 00:39:36.476 "nvme_ioq_poll_period_us": 0, 00:39:36.476 "io_queue_requests": 512, 00:39:36.476 "delay_cmd_submit": true, 00:39:36.476 "transport_retry_count": 4, 00:39:36.476 "bdev_retry_count": 3, 00:39:36.476 "transport_ack_timeout": 0, 00:39:36.476 "ctrlr_loss_timeout_sec": 0, 00:39:36.476 "reconnect_delay_sec": 0, 00:39:36.476 "fast_io_fail_timeout_sec": 0, 00:39:36.476 "disable_auto_failback": false, 00:39:36.476 "generate_uuids": false, 00:39:36.476 "transport_tos": 0, 00:39:36.476 "nvme_error_stat": false, 00:39:36.476 "rdma_srq_size": 0, 00:39:36.476 "io_path_stat": false, 00:39:36.476 "allow_accel_sequence": false, 00:39:36.476 "rdma_max_cq_size": 0, 00:39:36.476 "rdma_cm_event_timeout_ms": 0, 00:39:36.476 "dhchap_digests": [ 00:39:36.476 "sha256", 00:39:36.476 "sha384", 00:39:36.476 "sha512" 00:39:36.476 ], 00:39:36.476 "dhchap_dhgroups": [ 00:39:36.476 "null", 00:39:36.476 "ffdhe2048", 00:39:36.476 "ffdhe3072", 00:39:36.476 "ffdhe4096", 00:39:36.476 "ffdhe6144", 00:39:36.476 "ffdhe8192" 00:39:36.476 ] 00:39:36.476 } 00:39:36.476 }, 00:39:36.476 { 00:39:36.476 "method": "bdev_nvme_attach_controller", 00:39:36.476 "params": { 00:39:36.476 "name": "nvme0", 00:39:36.476 "trtype": "TCP", 00:39:36.476 "adrfam": "IPv4", 00:39:36.476 "traddr": "127.0.0.1", 00:39:36.476 "trsvcid": "4420", 00:39:36.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:36.476 "prchk_reftag": false, 00:39:36.476 "prchk_guard": false, 00:39:36.476 "ctrlr_loss_timeout_sec": 0, 00:39:36.476 "reconnect_delay_sec": 0, 00:39:36.476 "fast_io_fail_timeout_sec": 0, 00:39:36.476 "psk": "key0", 00:39:36.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:36.476 "hdgst": false, 00:39:36.476 "ddgst": false, 00:39:36.476 "multipath": "multipath" 00:39:36.476 } 00:39:36.477 }, 00:39:36.477 { 00:39:36.477 "method": "bdev_nvme_set_hotplug", 00:39:36.477 "params": { 
00:39:36.477 "period_us": 100000, 00:39:36.477 "enable": false 00:39:36.477 } 00:39:36.477 }, 00:39:36.477 { 00:39:36.477 "method": "bdev_wait_for_examine" 00:39:36.477 } 00:39:36.477 ] 00:39:36.477 }, 00:39:36.477 { 00:39:36.477 "subsystem": "nbd", 00:39:36.477 "config": [] 00:39:36.477 } 00:39:36.477 ] 00:39:36.477 }' 00:39:36.477 [2024-11-20 10:57:08.719189] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 00:39:36.477 [2024-11-20 10:57:08.719244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388918 ] 00:39:36.477 [2024-11-20 10:57:08.804104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.477 [2024-11-20 10:57:08.833594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:36.737 [2024-11-20 10:57:08.976539] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:37.321 10:57:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:37.321 10:57:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:37.321 10:57:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:37.321 10:57:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.321 10:57:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:37.581 10:57:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:37.581 10:57:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:37.581 10:57:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:37.581 10:57:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:37.581 10:57:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:37.581 10:57:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:37.581 10:57:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.581 10:57:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:37.582 10:57:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:37.582 10:57:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:37.582 10:57:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:37.582 10:57:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:37.582 10:57:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:37.582 10:57:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:37.842 10:57:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:37.842 10:57:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:37.842 10:57:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:37.842 10:57:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:38.102 10:57:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:38.102 10:57:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:38.102 10:57:10 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.dPLIAzcxn2 /tmp/tmp.uVTrHd8J1t 00:39:38.102 10:57:10 keyring_file -- keyring/file.sh@20 -- # killprocess 2388918 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2388918 ']' 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2388918 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2388918 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2388918' 00:39:38.102 killing process with pid 2388918 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@973 -- # kill 2388918 00:39:38.102 Received shutdown signal, test time was about 1.000000 seconds 00:39:38.102 00:39:38.102 Latency(us) 00:39:38.102 [2024-11-20T09:57:10.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.102 [2024-11-20T09:57:10.478Z] =================================================================================================================== 00:39:38.102 [2024-11-20T09:57:10.478Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@978 -- # wait 2388918 00:39:38.102 10:57:10 keyring_file -- keyring/file.sh@21 -- # killprocess 2386925 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2386925 ']' 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2386925 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:38.102 10:57:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386925 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386925' 00:39:38.362 killing process with pid 2386925 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@973 -- # kill 2386925 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@978 -- # wait 2386925 00:39:38.362 00:39:38.362 real 0m12.050s 00:39:38.362 user 0m29.186s 00:39:38.362 sys 0m2.651s 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.362 10:57:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:38.362 ************************************ 00:39:38.362 END TEST keyring_file 00:39:38.362 ************************************ 00:39:38.362 10:57:10 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:38.362 10:57:10 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:38.362 10:57:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:38.362 10:57:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.362 10:57:10 
-- common/autotest_common.sh@10 -- # set +x 00:39:38.624 ************************************ 00:39:38.624 START TEST keyring_linux 00:39:38.624 ************************************ 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:38.624 Joined session keyring: 562906205 00:39:38.624 * Looking for test storage... 00:39:38.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.624 --rc genhtml_branch_coverage=1 00:39:38.624 --rc genhtml_function_coverage=1 00:39:38.624 --rc genhtml_legend=1 00:39:38.624 --rc geninfo_all_blocks=1 00:39:38.624 --rc geninfo_unexecuted_blocks=1 00:39:38.624 00:39:38.624 ' 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.624 --rc genhtml_branch_coverage=1 00:39:38.624 --rc genhtml_function_coverage=1 00:39:38.624 --rc genhtml_legend=1 00:39:38.624 --rc geninfo_all_blocks=1 00:39:38.624 --rc geninfo_unexecuted_blocks=1 00:39:38.624 00:39:38.624 ' 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.624 --rc genhtml_branch_coverage=1 00:39:38.624 --rc genhtml_function_coverage=1 00:39:38.624 --rc genhtml_legend=1 00:39:38.624 --rc geninfo_all_blocks=1 00:39:38.624 --rc geninfo_unexecuted_blocks=1 00:39:38.624 00:39:38.624 ' 00:39:38.624 10:57:10 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:38.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.624 --rc genhtml_branch_coverage=1 00:39:38.624 --rc genhtml_function_coverage=1 00:39:38.624 --rc genhtml_legend=1 00:39:38.624 --rc geninfo_all_blocks=1 00:39:38.624 --rc geninfo_unexecuted_blocks=1 00:39:38.624 00:39:38.624 ' 00:39:38.624 10:57:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:38.624 10:57:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.624 10:57:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.624 10:57:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.624 10:57:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.624 10:57:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.624 10:57:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:38.624 10:57:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
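The lcov version gate traced at the top of this test reduces to a segment-wise numeric compare after splitting each version string on '.', '-' and ':'. A minimal standalone reduction of that idiom ("version_lt" is an illustrative name, not a function from the suite, and segments are assumed to be purely numeric):

    version_lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"   # e.g. "1.15" -> (1 15)
        IFS='.-:' read -ra v2 <<< "$2"   # e.g. "2"    -> (2)
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: keep the legacy --rc options"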
00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.624 10:57:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.891 10:57:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.891 10:57:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.891 10:57:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:38.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:38.891 10:57:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.891 10:57:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.891 10:57:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:38.891 10:57:11 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:38.891 10:57:11 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:38.891 10:57:11 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:38.891 10:57:11 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:38.891 10:57:11 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:38.891 10:57:11 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:38.891 10:57:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:38.891 /tmp/:spdk-test:key0 00:39:38.891 10:57:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:38.892 
10:57:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:38.892 10:57:11 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:38.892 10:57:11 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:38.892 10:57:11 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:38.892 10:57:11 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:38.892 10:57:11 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:38.892 10:57:11 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:38.892 10:57:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:38.892 /tmp/:spdk-test:key1 00:39:38.892 10:57:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2389811 00:39:38.892 10:57:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2389811 00:39:38.892 10:57:11 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:38.892 10:57:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2389811 ']' 00:39:38.892 10:57:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.892 10:57:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.892 10:57:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:38.892 10:57:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.892 10:57:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:38.892 [2024-11-20 10:57:11.144277] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
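The two inline "python -" invocations above derive the on-disk PSK files from the raw hex strings. A sketch of the computation they appear to perform, assuming the NVMe/TCP TLS PSK interchange layout of base64(key bytes + little-endian CRC32) wrapped as NVMeTLSkey-1:<two-hex-digit digest>:<b64>: ("psk_interchange" is an illustrative name, not from the suite):

    psk_interchange() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                              # literal ASCII bytes, as the suite uses them
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # integrity tag appended before encoding
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
    }
    psk_interchange 00112233445566778899aabbccddeeff 0
    # if the format assumption holds, this reproduces the NVMeTLSkey-1:00:MDAx...
    # value registered as :spdk-test:key0 below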
00:39:38.892 [2024-11-20 10:57:11.144356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389811 ] 00:39:38.892 [2024-11-20 10:57:11.232080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.196 [2024-11-20 10:57:11.269094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.793 10:57:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.793 10:57:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:39.793 10:57:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:39.793 10:57:11 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.793 10:57:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:39.793 [2024-11-20 10:57:11.939595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:39.793 null0 00:39:39.793 [2024-11-20 10:57:11.971655] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:39.793 [2024-11-20 10:57:11.972013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:39.793 10:57:11 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.793 10:57:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:39.793 294890211 00:39:39.793 10:57:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:39.793 957578161 00:39:39.793 10:57:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2390153 00:39:39.793 10:57:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2390153 /var/tmp/bperf.sock 00:39:39.793 10:57:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:39.793 10:57:12 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2390153 ']' 00:39:39.793 10:57:12 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:39.793 10:57:12 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.793 10:57:12 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:39.793 10:57:12 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.793 10:57:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:39.793 [2024-11-20 10:57:12.050703] Starting SPDK v25.01-pre git sha1 a25b16198 / DPDK 24.03.0 initialization... 
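Stripped of the suite's wrappers, the keyring lifecycle being exercised is plain keyctl against the session keyring (@s); the serials (294890211 and 957578161 in this run) are assigned by the kernel and vary per run:

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # prints the new key serial
    keyctl search @s user :spdk-test:key0            # resolves name -> same serial
    keyctl print "$sn"                               # payload must equal $psk
    keyctl unlink "$sn"                              # what cleanup() does at exit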
00:39:39.793 [2024-11-20 10:57:12.050755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390153 ] 00:39:39.793 [2024-11-20 10:57:12.132236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.793 [2024-11-20 10:57:12.161854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.734 10:57:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.734 10:57:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:40.734 10:57:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:40.734 10:57:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:40.734 10:57:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:40.734 10:57:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:40.994 10:57:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:40.994 10:57:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:41.254 [2024-11-20 10:57:13.409801] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:41.254 nvme0n1 00:39:41.254 10:57:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:41.254 10:57:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:41.254 10:57:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:41.254 10:57:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:41.254 10:57:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:41.254 10:57:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:41.514 10:57:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:41.514 10:57:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:41.514 10:57:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@25 -- # sn=294890211 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:41.514 10:57:13 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 294890211 == \2\9\4\8\9\0\2\1\1 ]] 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 294890211 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:41.514 10:57:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:41.774 Running I/O for 1 seconds... 00:39:42.714 24662.00 IOPS, 96.34 MiB/s 00:39:42.714 Latency(us) 00:39:42.714 [2024-11-20T09:57:15.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:42.714 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:42.714 nvme0n1 : 1.01 24660.97 96.33 0.00 0.00 5174.69 1774.93 6389.76 00:39:42.714 [2024-11-20T09:57:15.090Z] =================================================================================================================== 00:39:42.714 [2024-11-20T09:57:15.090Z] Total : 24660.97 96.33 0.00 0.00 5174.69 1774.93 6389.76 00:39:42.714 { 00:39:42.714 "results": [ 00:39:42.714 { 00:39:42.714 "job": "nvme0n1", 00:39:42.714 "core_mask": "0x2", 00:39:42.714 "workload": "randread", 00:39:42.714 "status": "finished", 00:39:42.714 "queue_depth": 128, 00:39:42.714 "io_size": 4096, 00:39:42.714 "runtime": 1.005232, 00:39:42.714 "iops": 24660.973785156064, 00:39:42.714 "mibps": 96.33192884826587, 00:39:42.714 "io_failed": 0, 00:39:42.714 "io_timeout": 0, 00:39:42.714 "avg_latency_us": 5174.692642732284, 00:39:42.714 "min_latency_us": 1774.9333333333334, 00:39:42.714 "max_latency_us": 6389.76 00:39:42.714 } 00:39:42.714 ], 00:39:42.714 "core_count": 1 00:39:42.714 } 00:39:42.714 10:57:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:42.714 10:57:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:42.975 10:57:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:42.975 10:57:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
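The happy-path run that just completed condenses to the following sequence; every command appears in the trace above, with paths made relative to the spdk checkout and a fixed sleep standing in for the suite's waitforlisten (a sketch, not the literal linux.sh script):

    sock=/var/tmp/bperf.sock
    build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
        -r "$sock" -z --wait-for-rpc &
    sleep 1  # crude stand-in for waitforlisten
    scripts/rpc.py -s "$sock" keyring_linux_set_options --enable
    scripts/rpc.py -s "$sock" framework_start_init
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests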
00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.975 10:57:15 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:42.975 10:57:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:43.237 [2024-11-20 10:57:15.491111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:43.237 [2024-11-20 10:57:15.491544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61d480 (107): Transport endpoint is not connected 00:39:43.237 [2024-11-20 10:57:15.492540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61d480 (9): Bad file descriptor 00:39:43.237 [2024-11-20 10:57:15.493541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:43.237 [2024-11-20 10:57:15.493548] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:43.237 [2024-11-20 10:57:15.493554] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:43.237 [2024-11-20 10:57:15.493561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:43.237 request: 00:39:43.237 { 00:39:43.237 "name": "nvme0", 00:39:43.237 "trtype": "tcp", 00:39:43.237 "traddr": "127.0.0.1", 00:39:43.237 "adrfam": "ipv4", 00:39:43.237 "trsvcid": "4420", 00:39:43.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:43.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:43.237 "prchk_reftag": false, 00:39:43.237 "prchk_guard": false, 00:39:43.237 "hdgst": false, 00:39:43.237 "ddgst": false, 00:39:43.237 "psk": ":spdk-test:key1", 00:39:43.237 "allow_unrecognized_csi": false, 00:39:43.237 "method": "bdev_nvme_attach_controller", 00:39:43.237 "req_id": 1 00:39:43.237 } 00:39:43.237 Got JSON-RPC error response 00:39:43.237 response: 00:39:43.237 { 00:39:43.237 "code": -5, 00:39:43.237 "message": "Input/output error" 00:39:43.237 } 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@33 -- # sn=294890211 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 294890211 00:39:43.237 1 links removed 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@33 -- # sn=957578161 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 957578161 00:39:43.237 1 links removed 00:39:43.237 10:57:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2390153 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2390153 ']' 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2390153 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390153 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390153' 00:39:43.237 killing process with pid 2390153 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 2390153 00:39:43.237 Received shutdown signal, test time was about 1.000000 seconds 00:39:43.237 00:39:43.237 
Latency(us) 00:39:43.237 [2024-11-20T09:57:15.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:43.237 [2024-11-20T09:57:15.613Z] =================================================================================================================== 00:39:43.237 [2024-11-20T09:57:15.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:43.237 10:57:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 2390153 00:39:43.497 10:57:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2389811 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2389811 ']' 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2389811 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2389811 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2389811' 00:39:43.497 killing process with pid 2389811 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 2389811 00:39:43.497 10:57:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 2389811 00:39:43.758 00:39:43.758 real 0m5.186s 00:39:43.758 user 0m9.623s 00:39:43.758 sys 0m1.479s 00:39:43.758 10:57:15 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:43.758 10:57:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:43.758 ************************************ 00:39:43.758 END TEST keyring_linux 00:39:43.758 ************************************ 00:39:43.758 10:57:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:43.758 10:57:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:43.758 10:57:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:43.758 10:57:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:43.758 10:57:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:43.758 10:57:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:43.758 10:57:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:43.758 10:57:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:43.758 10:57:15 -- common/autotest_common.sh@10 -- # set +x 00:39:43.758 10:57:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:43.758 10:57:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:43.758 10:57:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:43.758 10:57:15 -- common/autotest_common.sh@10 -- # set +x 00:39:51.902 INFO: APP EXITING 
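The repeated killprocess traces above all follow one shape: probe liveness with kill -0, inspect the process name, signal, then reap. An approximate reconstruction from the xtrace (the real helper also special-cases processes launched via sudo, elided here):

    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                 # probe: is it still alive?
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 / reactor_1
        # the traced helper branches when name == sudo; plain kill suffices here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true           # reap only if it is our child
    }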
00:39:51.902 INFO: killing all VMs 00:39:51.902 INFO: killing vhost app 00:39:51.902 WARN: no vhost pid file found 00:39:51.902 INFO: EXIT DONE 00:39:55.205 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:55.205 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:55.205 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:59.411 Cleaning 00:39:59.411 Removing: /var/run/dpdk/spdk0/config 00:39:59.411 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:59.411 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:59.411 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:59.411 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:59.411 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:59.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:59.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:59.412 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:59.412 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:59.412 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:59.412 Removing: /var/run/dpdk/spdk1/config 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:59.412 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:59.412 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:59.412 Removing: /var/run/dpdk/spdk2/config 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:59.412 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:59.412 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:59.412 Removing: 
/var/run/dpdk/spdk3/config 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:59.412 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:59.412 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:59.412 Removing: /var/run/dpdk/spdk4/config 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:59.412 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:59.412 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:59.412 Removing: /dev/shm/bdev_svc_trace.1 00:39:59.412 Removing: /dev/shm/nvmf_trace.0 00:39:59.412 Removing: /dev/shm/spdk_tgt_trace.pid1812203 00:39:59.412 Removing: /var/run/dpdk/spdk0 00:39:59.412 Removing: /var/run/dpdk/spdk1 00:39:59.412 Removing: /var/run/dpdk/spdk2 00:39:59.412 Removing: /var/run/dpdk/spdk3 00:39:59.412 Removing: /var/run/dpdk/spdk4 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1810715 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1812203 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1813047 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1814091 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1814438 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1815503 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1815665 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1815978 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1817115 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1817821 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1818176 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1818513 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1818863 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1819207 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1819548 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1819898 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1820254 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1821370 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1824887 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1825103 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1825449 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1825709 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1826081 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1826331 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1826786 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1826822 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1827170 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1827486 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1827587 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1827965 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1828424 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1828778 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1829157 00:39:59.412 Removing: 
/var/run/dpdk/spdk_pid1834168 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1839557 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1851638 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1852324 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1857451 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1857923 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1863143 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1870229 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1873330 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1886599 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1897477 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1899656 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1900833 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1921580 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1926462 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1982480 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1988987 00:39:59.412 Removing: /var/run/dpdk/spdk_pid1996581 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2004574 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2004656 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2005678 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2006692 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2007749 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2008355 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2008487 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2008696 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2008852 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2008860 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2009865 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2010873 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2011877 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2012547 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2012555 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2012888 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2014300 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2015407 00:39:59.412 Removing: /var/run/dpdk/spdk_pid2025395 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2059892 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2065297 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2067294 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2069503 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2069678 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2070017 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2070361 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2071075 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2073421 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2074504 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2075319 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2078361 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2079187 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2079922 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2084982 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2091648 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2091650 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2091652 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2096333 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2106509 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2111437 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2118662 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2120162 00:39:59.673 Removing: /var/run/dpdk/spdk_pid2121885 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2123528 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2129793 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2134961 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2139974 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2149079 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2149092 00:39:59.674 Removing: 
/var/run/dpdk/spdk_pid2154145 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2154476 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2154802 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2155143 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2155159 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2160853 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2161374 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2166868 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2169924 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2176609 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2183150 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2193953 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2202303 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2202322 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2225374 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2226146 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2226834 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2227522 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2228584 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2229288 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2230089 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2230940 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2236111 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2236452 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2244137 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2244346 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2250899 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2256152 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2267760 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2268499 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2273554 00:39:59.674 Removing: /var/run/dpdk/spdk_pid2273908 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2278952 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2285714 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2289314 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2301466 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2312109 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2313930 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2315012 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2334715 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2339315 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2343260 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2350730 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2350850 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2356916 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2359115 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2361504 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2362824 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2365340 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2366659 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2376802 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2377375 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2377985 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2380832 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2381433 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2382105 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2386925 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2387001 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2388918 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2389811 00:39:59.934 Removing: /var/run/dpdk/spdk_pid2390153 00:39:59.934 Clean 00:39:59.934 10:57:32 -- common/autotest_common.sh@1453 -- # return 0 00:39:59.934 10:57:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:59.934 10:57:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:59.934 10:57:32 -- common/autotest_common.sh@10 -- # set +x 00:39:59.934 10:57:32 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:39:59.934 10:57:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:59.934 10:57:32 -- common/autotest_common.sh@10 -- # set +x 00:40:00.195 10:57:32 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:00.195 10:57:32 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:00.195 10:57:32 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:00.195 10:57:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:00.195 10:57:32 -- spdk/autotest.sh@398 -- # hostname 00:40:00.195 10:57:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:00.195 geninfo: WARNING: invalid characters removed from testname! 00:40:26.772 10:57:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:28.683 10:58:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:31.227 10:58:03 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:32.649 10:58:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:34.558 10:58:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:36.469 10:58:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:38.377 10:58:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:38.377 10:58:10 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:38.377 10:58:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:38.377 10:58:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:38.377 10:58:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:38.377 10:58:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:38.377 + [[ -n 1725309 ]] 00:40:38.377 + sudo kill 1725309 00:40:38.388 [Pipeline] } 00:40:38.403 [Pipeline] // stage 00:40:38.408 [Pipeline] } 00:40:38.423 [Pipeline] // timeout 00:40:38.428 [Pipeline] } 00:40:38.443 [Pipeline] // catchError 00:40:38.450 [Pipeline] } 00:40:38.466 [Pipeline] // wrap 00:40:38.472 [Pipeline] } 00:40:38.483 [Pipeline] // catchError 00:40:38.489 [Pipeline] stage 00:40:38.491 [Pipeline] { (Epilogue) 00:40:38.501 [Pipeline] catchError 00:40:38.503 [Pipeline] { 00:40:38.513 [Pipeline] echo 00:40:38.515 Cleanup processes 00:40:38.520 [Pipeline] sh 00:40:38.810 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:38.810 2403150 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:38.824 [Pipeline] sh 00:40:39.113 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:39.113 ++ grep -v 'sudo pgrep' 00:40:39.113 ++ awk '{print $1}' 00:40:39.113 + sudo kill -9 00:40:39.113 + true 00:40:39.126 [Pipeline] sh 00:40:39.414 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:49.433 [Pipeline] sh 00:40:49.727 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:49.727 Artifacts sizes are good 00:40:49.742 [Pipeline] archiveArtifacts 00:40:49.750 Archiving artifacts 00:40:49.959 [Pipeline] sh 00:40:50.332 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:50.347 [Pipeline] cleanWs 00:40:50.358 [WS-CLEANUP] Deleting project workspace... 00:40:50.358 [WS-CLEANUP] Deferred wipeout is used... 00:40:50.366 [WS-CLEANUP] done 00:40:50.369 [Pipeline] } 00:40:50.387 [Pipeline] // catchError 00:40:50.400 [Pipeline] sh 00:40:50.689 + logger -p user.info -t JENKINS-CI 00:40:50.700 [Pipeline] } 00:40:50.714 [Pipeline] // stage 00:40:50.720 [Pipeline] } 00:40:50.736 [Pipeline] // node 00:40:50.742 [Pipeline] End of Pipeline 00:40:50.776 Finished: SUCCESS